Copyright (C) 2004-2006 Python Software Foundation. Authors: Baxter, Wouters and Warsaw. Contact: email-sig@python.org.

feedparser -- an email feed parser.

The feed parser implements an interface for incrementally parsing an email message, line by line. This has advantages for certain applications, such as those reading email messages off a socket. FeedParser.feed() is the primary interface for pushing new data into the parser. It returns when there's nothing more it can do with the available data. When you have no more data to push into the parser, call .close(). This completes the parsing and returns the root message object.

The other advantage of this parser is that it will never raise a parsing exception. Instead, when it finds something unexpected, it adds a 'defect' to the current message. Defects are just instances that live on the message object's .defects attribute.

headerRE encodes RFC 2822 section 3.6.8 (optional fields): ftext is %d33-57 / %d59-126 -- any character except controls, SP, and ':'.

BufferedSubFile is a file-ish object that can have new data loaded into it. You can also push and pop line-matching predicates onto a stack. When the current predicate matches the current line, a false EOF response (i.e. empty string) is returned instead. This lets the parser adhere to a simple abstraction -- it parses until EOF closes the current message. Its state consists of: the text stream of the last partial line pushed into the object (see issue 22233 for why this is a text stream and not a list), a deque of full, pushed lines, the stack of false-EOF checking predicates, and a flag indicating whether the file has been closed or not. close() must not forget any trailing partial line. readline() pops the line off the stack and sees whether it matches the current false-EOF predicate; RFC 2046 section 5.1.2 requires us to recognize outer-level boundaries at any level of inner nesting, so the predicates are checked in the order of most to least nested. When a predicate matches, we're at the false EOF, but the last line is pushed back first. unreadline() lets the consumer push a line back into the buffer.

push() pushes some new data into the object. If the data contains neither a newline nor a carriage return, there are no new complete lines, so it waits for more. Otherwise it cracks the data into lines, preserving the linesep characters. If the last element of the list does not end in a newline, it is treated as a partial line -- we only check for '\n' here because a line ending with '\r' might be a line that was split in the middle of a '\r\n' sequence (see bugs 1555570 and 1721862).

FeedParser is a feed-style parser of email. Its _factory is called with no arguments to create a new message object; if calling it with a policy raises TypeError, we assume it is an old-style factory. The policy keyword specifies a policy object that controls a number of aspects of the parser's operation; the default policy maintains backward compatibility. _set_headersonly() is a non-public interface for supporting Parser's headersonly flag. feed() pushes more data into the parser; close() parses all remaining data, looks for a final set of defects, and returns the root message object.

_parsegen() creates a new message and starts by parsing headers: it collects the headers, searching for a line that doesn't match the RFC 2822 header or continuation pattern (including an empty line). If it saw the RFC-defined header/body separator (i.e. a bare newline), it just throws it away; otherwise the line is part of the body, so it is pushed back. Once done with the headers, it parses them and figures out what we're supposed to see in the body of the message. Headers-only parsing is a backwards-compatibility hack, which was necessary in the older parser, which could raise errors; in that mode all remaining lines in the input are thrown into the message body.

message/delivery-status contains blocks of headers separated by a blank line. We'll represent each header block as a separate nested message object, but the processing is a bit different from standard message types because there is no body for the nested messages -- a blank line separates the subparts. We need to pop the EOF matcher in order to tell whether we're at the end of the current file, not the end of the last block of message headers. The input stream must be sitting at the newline or at the EOF. We want to see if we're at the end of this subpart, so first consume the blank line, then test the next line to see if we're at this subpart's EOF; if not at EOF, this is a line we're going to need.

If the message claims to be a message type, then what follows is another RFC 2822 message. If the message claims to be a multipart but has not defined a boundary, that's a problem, which we handle by reading everything until the EOF and marking the message as defective. We also make sure a valid content transfer encoding was specified, per RFC 2045 section 6.4.

For multiparts we create a line-match predicate which matches the inter-part boundary as well as the end-of-multipart boundary, but don't push it onto the input stream until we've scanned past the preamble. If we're looking at the end boundary, we're done with this multipart; if there was a newline at the end of the closing boundary, then we need to initialize the epilogue with the empty string (see below). When we see an inter-part boundary: were we in the preamble? According to RFC 2046, the last newline belongs to the boundary. Having seen a boundary separating two parts, we consume any multiple boundary lines that may be following -- our interpretation of the RFC 2046 BNF grammar does not produce body parts within such double boundaries -- then recurse to parse the subpart; the input stream points at the subpart's first line. Because of RFC 2046, the newline preceding the boundary separator actually belongs to the boundary, not the previous subpart's payload (or epilogue, if the previous part is a multipart); if the previous part is a multipart, we set it up for newline cleansing, which will happen if we're in a nested multipart. Otherwise, I think we must be in the preamble.

Once we've seen either the EOF or the end boundary: if we're still capturing the preamble, we never saw the start boundary -- note that as a defect and store the captured text as the payload. If we're not processing the preamble, then we might have seen EOF without seeing that end boundary; that is also a defect. Everything from here to the EOF is epilogue. If the end boundary ended in a newline, we need to make sure the epilogue isn't None. Any CRLF at the front of the epilogue is not technically part of the epilogue; also watch out for an empty-string epilogue, which means a single newline. Otherwise, the message is some non-multipart type, so the entire rest of the file contents becomes the payload.

_parse_headers() is passed a list of lines that make up the headers for the current message. It checks for continuations: if the first line of the headers is a continuation, that's illegal, so we note the defect, store the illegal line, and ignore it for purposes of headers. It checks for the envelope header (i.e. unix-from), stripping off the trailing newline. Something looking like a unix-from at the end is probably the first line of the body, so push back the line and stop; a weirdly placed unix-from line is noted as a defect and ignored. Each header line is split on the colon separating the field name from the value -- there will always be a colon, because if there weren't, the part of the parser that calls us would have started parsing the body. If the colon is at the start of the line, the header is clearly malformed, but we might be able to salvage the rest of the message: track the error, but keep going. Once done with all the lines, handle the last header.

BytesFeedParser is like FeedParser, but its feed() accepts bytes.
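The push-based interface described above can be exercised directly; a minimal sketch that feeds a well-formed message in small, arbitrary chunks (as if it were arriving over a socket) and then closes the parser:

```python
from email.feedparser import FeedParser

raw = (
    "From: alice@example.com\r\n"
    "To: bob@example.com\r\n"
    "Subject: incremental parsing\r\n"
    "\r\n"
    "Hello, world!\r\n"
)

parser = FeedParser()
# Feed 7 characters at a time, deliberately splitting lines (and even
# the \r\n pairs) mid-sequence; BufferedSubFile reassembles them.
for i in range(0, len(raw), 7):
    parser.feed(raw[i:i + 7])
msg = parser.close()          # completes parsing, returns the root Message

print(msg['Subject'])         # incremental parsing
print(msg.defects)            # [] -- well-formed input produces no defects
```

Note that the payload keeps its original line endings (`'Hello, world!\r\n'`), since the parser preserves linesep characters when it cracks data into lines.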
__all__ = ['FeedParser', 'BytesFeedParser']

import re

from email import errors
from email._policybase import compat32
from collections import deque
from io import StringIO

NLCRE = re.compile(r'\r\n|\r|\n')
NLCRE_bol = re.compile(r'(\r\n|\r|\n)')
NLCRE_eol = re.compile(r'(\r\n|\r|\n)\Z')
NLCRE_crack = re.compile(r'(\r\n|\r|\n)')
headerRE = re.compile(r'^(From |[\041-\071\073-\176]*:|[\t ])')
EMPTYSTRING = ''
NL = '\n'

boundaryendRE = re.compile(
    r'(?P<end>--)?(?P<ws>[ \t]*)(?P<linesep>\r\n|\r|\n)?$')

NeedMoreData = object()


class BufferedSubFile(object):
    """A file-ish object that can have new data loaded into it."""

    def __init__(self):
        self._partial = StringIO(newline='')
        self._lines = deque()
        self._eofstack = []
        self._closed = False

    def push_eof_matcher(self, pred):
        self._eofstack.append(pred)

    def pop_eof_matcher(self):
        return self._eofstack.pop()

    def close(self):
        # Don't forget any trailing partial line.
        self._partial.seek(0)
        self.pushlines(self._partial.readlines())
        self._partial.seek(0)
        self._partial.truncate()
        self._closed = True

    def readline(self):
        if not self._lines:
            if self._closed:
                return ''
            return NeedMoreData
        # Pop the line off the stack and see if it matches the current
        # false-EOF predicate, most nested first (RFC 2046, 5.1.2).
        line = self._lines.popleft()
        for ateof in reversed(self._eofstack):
            if ateof(line):
                # We're at the false EOF.  But push the last line back first.
                self._lines.appendleft(line)
                return ''
        return line

    def unreadline(self, line):
        # Let the consumer push a line back into the buffer.
        assert line is not NeedMoreData
        self._lines.appendleft(line)

    def push(self, data):
        """Push some new data into this object."""
        self._partial.write(data)
        if '\n' not in data and '\r' not in data:
            # No new complete lines, wait for more.
            return
        # Crack into lines, preserving the linesep characters.
        self._partial.seek(0)
        parts = self._partial.readlines()
        self._partial.seek(0)
        self._partial.truncate()
        # If the last element does not end in a newline, treat it as a
        # partial line (a trailing '\r' might be half of a '\r\n').
        if not parts[-1].endswith('\n'):
            self._partial.write(parts.pop())
        self.pushlines(parts)

    def pushlines(self, lines):
        self._lines.extend(lines)

    def __iter__(self):
        return self

    def __next__(self):
        line = self.readline()
        if line == '':
            raise StopIteration
        return line


class FeedParser:
    """A feed-style parser of email."""

    def __init__(self, _factory=None, *, policy=compat32):
        self.policy = policy
        self._old_style_factory = False
        if _factory is None:
            if policy.message_factory is None:
                from email.message import Message
                self._factory = Message
            else:
                self._factory = policy.message_factory
        else:
            self._factory = _factory
            try:
                _factory(policy=self.policy)
            except TypeError:
                # Assume this is an old-style factory.
                self._old_style_factory = True
        self._input = BufferedSubFile()
        self._msgstack = []
        self._parse = self._parsegen().__next__
        self._cur = None
        self._last = None
        self._headersonly = False

    # Non-public interface for supporting Parser's headersonly flag.
    def _set_headersonly(self):
        self._headersonly = True

    def feed(self, data):
        """Push more data into the parser."""
        self._input.push(data)
        self._call_parse()

    def _call_parse(self):
        try:
            self._parse()
        except StopIteration:
            pass

    def close(self):
        """Parse all remaining data and return the root message object."""
        self._input.close()
        self._call_parse()
        root = self._pop_message()
        assert not self._msgstack
        # Look for final set of defects.
        if root.get_content_maintype() == 'multipart' \
                and not root.is_multipart() and not self._headersonly:
            defect = errors.MultipartInvariantViolationDefect()
            self.policy.handle_defect(root, defect)
        return root

    def _new_message(self):
        if self._old_style_factory:
            msg = self._factory()
        else:
            msg = self._factory(policy=self.policy)
        if self._cur and self._cur.get_content_type() == 'multipart/digest':
            msg.set_default_type('message/rfc822')
        if self._msgstack:
            self._msgstack[-1].attach(msg)
        self._msgstack.append(msg)
        self._cur = msg
        self._last = msg

    def _pop_message(self):
        retval = self._msgstack.pop()
        if self._msgstack:
            self._cur = self._msgstack[-1]
        else:
            self._cur = None
        return retval

    def _parsegen(self):
        # Create a new message and start by parsing headers.
        self._new_message()
        headers = []
        for line in self._input:
            if line is NeedMoreData:
                yield NeedMoreData
                continue
            if not headerRE.match(line):
                if not NLCRE.match(line):
                    defect = errors.MissingHeaderBodySeparatorDefect()
                    self.policy.handle_defect(self._cur, defect)
                    self._input.unreadline(line)
                break
            headers.append(line)
        self._parse_headers(headers)
        if self._headersonly:
            lines = []
            while True:
                line = self._input.readline()
                if line is NeedMoreData:
                    yield NeedMoreData
                    continue
                if line == '':
                    break
                lines.append(line)
            self._cur.set_payload(EMPTYSTRING.join(lines))
            return
        if self._cur.get_content_type() == 'message/delivery-status':
            while True:
                self._input.push_eof_matcher(NLCRE.match)
                for retval in self._parsegen():
                    if retval is NeedMoreData:
                        yield NeedMoreData
                        continue
                    break
                self._pop_message()
                self._input.pop_eof_matcher()
                while True:
                    line = self._input.readline()
                    if line is NeedMoreData:
                        yield NeedMoreData
                        continue
                    break
                while True:
                    line = self._input.readline()
                    if line is NeedMoreData:
                        yield NeedMoreData
                        continue
                    break
                if line == '':
                    break
                self._input.unreadline(line)
            return
        if self._cur.get_content_maintype() == 'message':
            for retval in self._parsegen():
                if retval is NeedMoreData:
                    yield NeedMoreData
                    continue
                break
            self._pop_message()
            return
        if self._cur.get_content_maintype() == 'multipart':
            boundary = self._cur.get_boundary()
            if boundary is None:
                defect = errors.NoBoundaryInMultipartDefect()
                self.policy.handle_defect(self._cur, defect)
                lines = []
                for line in self._input:
                    if line is NeedMoreData:
                        yield NeedMoreData
                        continue
                    lines.append(line)
                self._cur.set_payload(EMPTYSTRING.join(lines))
                return
            if (str(self._cur.get('content-transfer-encoding', '8bit')).lower()
                    not in ('7bit', '8bit', 'binary')):
                defect = errors.InvalidMultipartContentTransferEncodingDefect()
                self.policy.handle_defect(self._cur, defect)
            separator = '--' + boundary

            def boundarymatch(line):
                if not line.startswith(separator):
                    return None
                return boundaryendRE.match(line, len(separator))

            capturing_preamble = True
            preamble = []
            linesep = False
            close_boundary_seen = False
            while True:
                line = self._input.readline()
                if line is NeedMoreData:
                    yield NeedMoreData
                    continue
                if line == '':
                    break
                mo = boundarymatch(line)
                if mo:
                    if mo.group('end'):
                        close_boundary_seen = True
                        linesep = mo.group('linesep')
                        break
                    if capturing_preamble:
                        if preamble:
                            lastline = preamble[-1]
                            eolmo = NLCRE_eol.search(lastline)
                            if eolmo:
                                preamble[-1] = lastline[:-len(eolmo.group(0))]
                            self._cur.preamble = EMPTYSTRING.join(preamble)
                        capturing_preamble = False
                        self._input.unreadline(line)
                        continue
                    while True:
                        line = self._input.readline()
                        if line is NeedMoreData:
                            yield NeedMoreData
                            continue
                        mo = boundarymatch(line)
                        if not mo:
                            self._input.unreadline(line)
                            break
                    self._input.push_eof_matcher(boundarymatch)
                    for retval in self._parsegen():
                        if retval is NeedMoreData:
                            yield NeedMoreData
                            continue
                        break
                    if self._last.get_content_maintype() == 'multipart':
                        epilogue = self._last.epilogue
                        if epilogue == '':
                            self._last.epilogue = None
                        elif epilogue is not None:
                            mo = NLCRE_eol.search(epilogue)
                            if mo:
                                end = len(mo.group(0))
                                self._last.epilogue = epilogue[:-end]
                    else:
                        payload = self._last._payload
                        if isinstance(payload, str):
                            mo = NLCRE_eol.search(payload)
                            if mo:
                                payload = payload[:-len(mo.group(0))]
                                self._last._payload = payload
                    self._input.pop_eof_matcher()
                    self._pop_message()
                    self._last = self._cur
                else:
                    assert capturing_preamble
                    preamble.append(line)
            if capturing_preamble:
                defect = errors.StartBoundaryNotFoundDefect()
                self.policy.handle_defect(self._cur, defect)
                self._cur.set_payload(EMPTYSTRING.join(preamble))
                epilogue = []
                for line in self._input:
                    if line is NeedMoreData:
                        yield NeedMoreData
                        continue
                self._cur.epilogue = EMPTYSTRING.join(epilogue)
                return
            if not close_boundary_seen:
                defect = errors.CloseBoundaryNotFoundDefect()
                self.policy.handle_defect(self._cur, defect)
                return
            if linesep:
                epilogue = ['']
            else:
                epilogue = []
            for line in self._input:
                if line is NeedMoreData:
                    yield NeedMoreData
                    continue
                epilogue.append(line)
            if epilogue:
                firstline = epilogue[0]
                bolmo = NLCRE_bol.match(firstline)
                if bolmo:
                    epilogue[0] = firstline[len(bolmo.group(0)):]
            self._cur.epilogue = EMPTYSTRING.join(epilogue)
            return
        # Some non-multipart type: the rest of the file is the payload.
        lines = []
        for line in self._input:
            if line is NeedMoreData:
                yield NeedMoreData
                continue
            lines.append(line)
        self._cur.set_payload(EMPTYSTRING.join(lines))

    def _parse_headers(self, lines):
        lastheader = ''
        lastvalue = []
        for lineno, line in enumerate(lines):
            # Check for continuation.
            if line[0] in ' \t':
                if not lastheader:
                    defect = errors.FirstHeaderLineIsContinuationDefect(line)
                    self.policy.handle_defect(self._cur, defect)
                    continue
                lastvalue.append(line)
                continue
            if lastheader:
                self._cur.set_raw(*self.policy.header_source_parse(lastvalue))
                lastheader, lastvalue = '', []
            # Check for envelope header, i.e. unix-from.
            if line.startswith('From '):
                if lineno == 0:
                    # Strip off the trailing newline.
                    mo = NLCRE_eol.search(line)
                    if mo:
                        line = line[:-len(mo.group(0))]
                    self._cur.set_unixfrom(line)
                    continue
                elif lineno == len(lines) - 1:
                    # Probably the first line of the body: push it back.
                    self._input.unreadline(line)
                    return
                else:
                    # Weirdly placed unix-from line.
                    defect = errors.MisplacedEnvelopeHeaderDefect(line)
                    self._cur.defects.append(defect)
                    continue
            # Split the line on the colon separating field name from value.
            i = line.find(':')
            if i == 0:
                defect = errors.InvalidHeaderDefect("Missing header name.")
                self._cur.defects.append(defect)
                continue
            assert i > 0, "_parse_headers fed line with no : and no leading WS"
            lastheader = line[:i]
            lastvalue = [line]
        # Done with all the lines, so handle the last header.
        if lastheader:
            self._cur.set_raw(*self.policy.header_source_parse(lastvalue))


class BytesFeedParser(FeedParser):
    """Like FeedParser, but feed accepts bytes."""

    def feed(self, data):
        super().feed(data.decode('ascii', 'surrogateescape'))
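A sketch of the never-raise behavior the code above implements: feeding a multipart message that declares no boundary parameter does not raise; the parser instead records a NoBoundaryInMultipartDefect and keeps the whole body as the payload.

```python
from email.feedparser import BytesFeedParser
from email import errors

# A multipart message missing its boundary parameter.
broken = (
    b"MIME-Version: 1.0\r\n"
    b"Content-Type: multipart/mixed\r\n"
    b"\r\n"
    b"this body can no longer be split into parts\r\n"
)

parser = BytesFeedParser()     # BytesFeedParser.feed accepts bytes
parser.feed(broken)
msg = parser.close()

# Parsing succeeded; the problem is recorded on msg.defects instead.
assert any(isinstance(d, errors.NoBoundaryInMultipartDefect)
           for d in msg.defects)
```

close() additionally flags the result with a MultipartInvariantViolationDefect, since the message claims to be multipart but is_multipart() is false.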
Copyright (C) 2001-2010 Python Software Foundation. Author: Barry Warsaw. Contact: email-sig@python.org.

Classes to generate plain text from a message object tree. __all__ = ['Generator', 'DecodedGenerator', 'BytesGenerator']. The module imports re, sys, time, random, deepcopy from copy, StringIO and BytesIO from io, and _has_surrogates from email.utils. UNDERSCORE = '_' and NL = '\n'. NLCRE = re.compile(r'\r\n|\r|\n') (XXX: no longer used by the code below). fcre = re.compile(r'^From ', re.MULTILINE).

class Generator generates output from a message object tree; this basic generator writes the message to the given file object as plain text.

Public interface. __init__(outfp, mangle_from_=None, maxheaderlen=None, *, policy=None) creates the generator for message flattening. outfp is the output file-like object for writing the message to; it must have a write() method. Optional mangle_from_ is a flag that, when True (the default if policy is not set), escapes "From " lines in the body of the message by putting a '>' in front of them. Optional maxheaderlen specifies the longest length for a non-continued header; when a header line is longer (in characters, with tabs expanded to 8 spaces) than maxheaderlen, the header will split as defined in the Header class. Set maxheaderlen to zero to disable header wrapping. The default is 78, as recommended (but not required) by RFC 2822. The policy keyword specifies a policy object that controls a number of aspects of the generator's operation; if no policy is specified, the policy associated with the message object passed to the flatten method is used. If mangle_from_ is None, it is True when policy is None, else policy.mangle_from_.

write(s) just delegates to the file object.

flatten(msg, unixfrom=False, linesep=None) prints the message object tree rooted at msg to the output file specified when the Generator instance was created. unixfrom is a flag that forces the printing of a Unix From_ delimiter before the first object in the message tree; if the original message has no From_ delimiter, a standard one is crafted. By default this is False, to inhibit the printing of any From_ delimiter (note that for subobjects, no From_ line is printed). linesep specifies the characters used to indicate a new line in the output; the default value is determined by the policy specified when the Generator instance was created or, if none was specified, from the policy associated with the msg. We use the _XXX constants for operating on data that comes directly from the msg, and _encoded_XXX constants for operating on data that has already been converted (to bytes in the BytesGenerator) and inserted into a temporary buffer. Because we use clone() below when we recursively process message subparts, and because clone() uses the computed policy (not None), submessages will automatically get set to the computed policy when they are processed by this code.

clone(fp) clones this generator with the exact same options: it returns self.__class__(fp, self._mangle_from_, None, policy=self.policy), passing None for maxheaderlen so the policy setting, which we've adjusted, is used.

Protected interface (undocumented). Note that we use self.write() when what we are writing is coming from the source, and self._fp.write() when what we are writing is coming from a buffer (because the Bytes subclass has already had a chance to transform the data in its write method in that case). This is an entirely pragmatic split determined by experiment; we could be more general by always using write() and having the Bytes subclass write method detect when it has already transformed the input, but since this whole thing is a hack anyway, this seems good enough.

_new_buffer() returns StringIO(); BytesGenerator overrides this to return BytesIO. _encode(s) returns s unchanged; BytesGenerator overrides this to encode strings to bytes.

_write_lines(lines) transforms the line endings: it splits on NLCRE, writes each line followed by self._NL, and writes the final fragment only if it is non-empty. (XXX: logic tells me an else clause should be needed here, but the tests fail with it and pass without it; NLCRE.split ends with a blank element if and only if there was a trailing newline.)

_write(msg): we can't write the headers yet because of the following scenario. Say a multipart message includes the boundary string somewhere in its body; we'd have to calculate the new boundary before we write the headers, so that we can write the correct Content-Type: parameter. The way we do this, so as to make the _handle_*() methods simpler, is to cache any subpart writes into a buffer, then write the headers and the buffer contents. That way, subpart handlers can do the Right Thing and can still modify the Content-Type: header if necessary. If we munged the cte, we copy the message again and re-fix the cte, preserving the header order if the Content-Transfer-Encoding header already exists. To write the headers, we first see if the message object wants to handle that itself via a _write_headers attribute; if not, we do it generically.

_dispatch(msg) gets the content type for the message, then tries to dispatch to self._handle_<maintype>_<subtype>(); if there's no handler for the full MIME type, it dispatches to self._handle_<maintype>(); if that's missing too, it falls back to self._write_body().

_write_headers(msg) writes self.policy.fold(h, v) for each raw header item, followed by a blank line, which always separates headers from body.

Handlers for writing types and subtypes. _handle_text(msg): if the payload is None, return; a non-string payload raises TypeError ('string payload expected'). If the payload has surrogates, the charset parameter is consulted and, if present, the message is deep-copied, its Content-Transfer-Encoding deleted, and the payload re-set with the charset (XXX: this copy stuff is an ugly hack to avoid modifying the existing message); the resulting cte and content type are remembered in self._munge_cte. If self._mangle_from_, "From " lines are rewritten as ">From " before the lines are written. The default body handler _write_body is an alias for _handle_text.

_handle_multipart(msg): the trick here is to write out each part separately, merge them all together, and then make sure that the boundary we've chosen isn't present in the payload. A None payload is treated as empty; a string payload (e.g. a non-strict parse of a message with no starting boundary) is written directly; a scalar payload is wrapped in a list. Each subpart is flattened into a fresh buffer by a clone of this generator. (BAW: what about boundaries that are wrapped in double-quotes?) If there is no boundary, one is crafted that doesn't appear in any of the message texts, and set on the message. If there's a preamble, it is written out (From-mangled if requested) with a trailing CRLF; then the dash-boundary + transport-padding + CRLF, the first body part, each further body part preceded by the encapsulation delimiter, and finally the close delimiter; if there is an epilogue, it is written out too.

_handle_multipart_signed(msg): the contents of signed parts have to stay unmodified in order to keep the signature intact, per RFC 1847 section 2.1, so we disable header wrapping by temporarily replacing self.policy with self.policy.clone(max_line_length=0). (RDM: this isn't enough to completely preserve the part, but it helps.)

_handle_message_delivery_status(msg): we can't just write the headers directly to self's file object, because this would leave an extra newline between the last header block and the boundary. Sigh. Each part is flattened into a buffer, the unnecessary trailing empty line is stripped, and the blocks are then joined with an empty line; this has the lovely effect of separating each block with an empty line, but not adding an extra one after the last one.

_handle_message(msg): the payload of a message/rfc822 part should be a multipart sequence of length 1; the zeroth element of the list should be the Message object for the subpart. Extract that object, stringify it, and write it out -- except it turns out that sometimes it's a string instead, which happens when and only when HeaderParser is used on a message of MIME type message/rfc822. Such messages are generated by, for example, GroupWise when forwarding unadorned messages (issue 7970), so in that case we just emit the (encoded) string body.

_make_boundary(cls, text=None) crafts a random boundary from random.randrange(sys.maxsize); if text is given, it ensures the chosen boundary doesn't appear in the text by appending an incrementing counter until a MULTILINE search for the escaped candidate fails. This used to be a module-level function; we use a classmethod for this (and for _compile_re) so we can continue to provide the module-level function for backward compatibility by doing _make_boundary = Generator._make_boundary at the end of the module. It is internal, so we could drop that. _compile_re(cls, s, flags) returns re.compile(s, flags).

class BytesGenerator(Generator): write(s) encodes with 'ascii'/'surrogateescape' before writing; _new_buffer() returns BytesIO(); _encode(s) returns s.encode('ascii'). _write_headers is almost the same as the string version, except for handling strings with 8-bit bytes: it writes self.policy.fold_binary(h, v) for each raw header item, then the blank line that always separates headers from body. _handle_text: if the string has surrogates, the original source was bytes, so just write it back out (mangling From_ lines if requested) unless self.policy.cte_type is '7bit'; otherwise defer to the superclass. Its _compile_re encodes the pattern to ascii before compiling.

_FMT = '[Non-text (%(type)s) part of message, omitted filename %(filename)s]'

class DecodedGenerator(Generator): __init__(outfp, mangle_from_=None, maxheaderlen=None, fmt=None, *, policy=None) calls Generator.__init__ and stores _FMT if fmt is None, else fmt. Its _dispatch walks the message; for each text part it prints the payload (get_payload(decode=False)) to self, multipart parts are just skipped, and anything else is printed using self._fmt with the part's type, maintype, subtype, filename ('[no filename]'), description ('[no description]'), and encoding ('[no encoding]').

Helpers used by Generator._make_boundary: _width = len(repr(sys.maxsize - 1)) and _fmt = '%%0%dd' % _width. For backward compatibility: _make_boundary = Generator._make_boundary.
had a chance to transform the data in its write method in that case this is an entirely pragmatic split determined by experiment we could be more general by always using write and having the bytes subclass write method detect when it has already transformed the input but since this whole thing is a hack anyway this seems good enough bytesgenerator overrides this to return bytesio bytesgenerator overrides this to encode strings to bytes we have to transform the line endings xxx logic tells me this else should be needed but the tests fail with it and pass without it nlcre split ends with a blank element if and only if there was a trailing newline else self write self _nl we can t write the headers yet because of the following scenario say a multipart message includes the boundary string somewhere in its body we d have to calculate the new boundary before we write the headers so that we can write the correct content type parameter the way we do this so as to make the _handle_ methods simpler is to cache any subpart writes into a buffer then we write the headers and the buffer contents that way subpart handlers can do the right thing and can still modify the content type header if necessary if we munged the cte copy the message again and re fix the cte preserve the header order if the cte header already exists write the headers first we see if the message object wants to handle that itself if not we ll do it generically get the content type for the message then try to dispatch to self _handle_ maintype _ subtype if there s no handler for the full mime type then dispatch to self _handle_ maintype if that s missing too then dispatch to self _writebody default handlers a blank line always separates headers from body handlers for writing types and subtypes xxx this copy stuff is an ugly hack to avoid modifying the existing message default body handler the trick here is to write out each part separately merge them all together and then make sure that the boundary we ve 
chosen isn t present in the payload e g a non strict parse of a message with no starting boundary scalar payload baw what about boundaries that are wrapped in double quotes create a boundary that doesn t appear in any of the message texts if there s a preamble write it out with a trailing crlf dash boundary transport padding crlf body part encapsulation delimiter transport padding crlf body part delimiter transport padding crlf body part close delimiter transport padding the contents of signed parts has to stay unmodified in order to keep the signature intact per rfc1847 2 1 so we disable header wrapping rdm this isn t enough to completely preserve the part but it helps we can t just write the headers directly to self s file object because this will leave an extra newline between the last header block and the boundary sigh strip off the unnecessary trailing empty line now join all the blocks with an empty line this has the lovely effect of separating each block with an empty line but not adding an extra one after the last one the payload of a message rfc822 part should be a multipart sequence of length 1 the zeroth element of the list should be the message object for the subpart extract that object stringify it and write it out except it turns out when it s a string instead which happens when and only when headerparser is used on a message of mime type message rfc822 such messages are generated by for example groupwise when forwarding unadorned messages issue 7970 so in that case we just emit the string body this used to be a module level function we use a classmethod for this and _compile_re so we can continue to provide the module level function for backward compatibility by doing _make_boundary generator _make_boundary at the end of the module it is internal so we could drop that craft a random boundary if text is given ensure that the chosen boundary doesn t appear in the text generates a bytes version of a message object tree functionally identical to the base 
generator except that the output is bytes and not string when surrogates were used in the input to encode bytes these are decoded back to bytes for output if the policy has cte_type set to 7bit then the message is transformed such that the non ascii bytes are properly content transfer encoded using the charset unknown 8bit the outfp object must accept bytes in its write method this is almost the same as the string version except for handling strings with 8bit bytes a blank line always separates headers from body if the string has surrogates the original source was bytes so just write it back out default body handler generates a text representation of a message like the generator base class except that non text parts are substituted with a format string representing the part like generator __init__ except that an additional optional argument is allowed walks through all subparts of a message if the subpart is of main type text then it prints the decoded payload of the subpart otherwise fmt is a format string that is used instead of the message payload fmt is expanded with the following keywords in keyword s format type full mime type of the non text part maintype main mime type of the non text part subtype sub mime type of the non text part filename filename of the non text part description description associated with the non text part encoding content transfer encoding of the non text part the default value for fmt is none meaning non text type s part of message omitted filename filename s just skip this helper used by generator _make_boundary backward compatibility
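As a quick illustration of the public interface summarized above, the module can be driven like this (a minimal sketch; the header and body values are invented):

```python
from io import StringIO
from email.message import Message
from email.generator import Generator

# Build a trivial message and flatten it into a string buffer.
msg = Message()
msg['Subject'] = 'Hello'
msg.set_payload('body text\n')

buf = StringIO()
gen = Generator(buf, mangle_from_=False, maxheaderlen=60)
gen.flatten(msg)
flat = buf.getvalue()
print(flat)
```

The headers are written first, then a blank line, then the payload, using the line separator taken from the message's policy.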
__all__ = ['Generator', 'DecodedGenerator', 'BytesGenerator']

import re
import sys
import time
import random

from copy import deepcopy
from io import StringIO, BytesIO
from email.utils import _has_surrogates

UNDERSCORE = '_'
NL = '\n'  # XXX: no longer used by the code below.

NLCRE = re.compile(r'\r\n|\r|\n')
fcre = re.compile(r'^From ', re.MULTILINE)


class Generator:
    """Generates output from a Message object tree.

    This basic generator writes the message to the given file object as
    plain text.
    """

    def __init__(self, outfp, mangle_from_=None, maxheaderlen=None, *,
                 policy=None):
        if mangle_from_ is None:
            mangle_from_ = True if policy is None else policy.mangle_from_
        self._fp = outfp
        self._mangle_from_ = mangle_from_
        self.maxheaderlen = maxheaderlen
        self.policy = policy

    def write(self, s):
        # Just delegate to the file object.
        self._fp.write(s)

    def flatten(self, msg, unixfrom=False, linesep=None):
        r"""Print the message object tree rooted at msg to the output file
        specified when the Generator instance was created.
        """
        # We use the _XXX constants for operating on data that comes directly
        # from the msg, and _encoded_XXX constants for operating on data that
        # has already been converted (to bytes in the BytesGenerator) and
        # inserted into a temporary buffer.
        policy = msg.policy if self.policy is None else self.policy
        if linesep is not None:
            policy = policy.clone(linesep=linesep)
        if self.maxheaderlen is not None:
            policy = policy.clone(max_line_length=self.maxheaderlen)
        self._NL = policy.linesep
        self._encoded_NL = self._encode(self._NL)
        self._EMPTY = ''
        self._encoded_EMPTY = self._encode(self._EMPTY)
        # Because we use clone (below) when we recursively process message
        # subparts, and because clone uses the computed policy (not None),
        # submessages will automatically get set to the computed policy when
        # they are processed by this code.
        old_gen_policy = self.policy
        old_msg_policy = msg.policy
        try:
            self.policy = policy
            msg.policy = policy
            if unixfrom:
                ufrom = msg.get_unixfrom()
                if not ufrom:
                    ufrom = 'From nobody ' + time.ctime(time.time())
                self.write(ufrom + self._NL)
            self._write(msg)
        finally:
            self.policy = old_gen_policy
            msg.policy = old_msg_policy

    def clone(self, fp):
        """Clone this generator with the exact same options."""
        return self.__class__(fp,
                              self._mangle_from_,
                              None,  # Use policy setting, which we've adjusted
                              policy=self.policy)

    #
    # Protected interface - undocumented ;/
    #

    # Note that we use 'self.write' when what we are writing is coming from
    # the source, and self._fp.write when what we are writing is coming from
    # a buffer (because the Bytes subclass has already had a chance to
    # transform the data in its write method in that case).

    def _new_buffer(self):
        # BytesGenerator overrides this to return BytesIO.
        return StringIO()

    def _encode(self, s):
        # BytesGenerator overrides this to encode strings to bytes.
        return s

    def _write_lines(self, lines):
        # We have to transform the line endings.
        if not lines:
            return
        lines = NLCRE.split(lines)
        for line in lines[:-1]:
            self.write(line)
            self.write(self._NL)
        if lines[-1]:
            self.write(lines[-1])
        # XXX logic tells me this else should be needed, but the tests fail
        # with it and pass without it.  (NLCRE.split ends with a blank element
        # if and only if there was a trailing newline.)
        #else:
        #    self.write(self._NL)

    def _write(self, msg):
        # We can't write the headers yet because of the following scenario:
        # say a multipart message includes the boundary string somewhere in
        # its body.  We'd have to calculate the new boundary /before/ we
        # write the headers so that we can write the correct Content-Type:
        # parameter.
        #
        # The way we do this, so as to make the _handle_*() methods simpler,
        # is to cache any subpart writes into a buffer.  Then we write the
        # headers and the buffer contents.  That way, subpart handlers can
        # do the right thing, and can still modify the Content-Type: header
        # if necessary.
        oldfp = self._fp
        try:
            self._munge_cte = None
            self._fp = sfp = self._new_buffer()
            self._dispatch(msg)
        finally:
            self._fp = oldfp
            munge_cte = self._munge_cte
            del self._munge_cte
        # If we munged the cte, copy the message again and re-fix the cte.
        if munge_cte:
            msg = deepcopy(msg)
            # Preserve the header order if the cte header already exists.
            if msg.get('content-transfer-encoding') is None:
                msg['Content-Transfer-Encoding'] = munge_cte[0]
            else:
                msg.replace_header('content-transfer-encoding', munge_cte[0])
            msg.replace_header('content-type', munge_cte[1])
        # Write the headers.  First we see if the message object wants to
        # handle that itself.  If not, we'll do it generically.
        meth = getattr(msg, '_write_headers', None)
        if meth is None:
            self._write_headers(msg)
        else:
            meth(self)
        self._fp.write(sfp.getvalue())

    def _dispatch(self, msg):
        # Get the Content-Type: for the message, then try to dispatch to
        # self._handle_<maintype>_<subtype>().  If there's no handler for the
        # full MIME type, then dispatch to self._handle_<maintype>().  If
        # that's missing too, then dispatch to self._writeBody().
        main = msg.get_content_maintype()
        sub = msg.get_content_subtype()
        specific = UNDERSCORE.join((main, sub)).replace('-', '_')
        meth = getattr(self, '_handle_' + specific, None)
        if meth is None:
            generic = main.replace('-', '_')
            meth = getattr(self, '_handle_' + generic, None)
            if meth is None:
                meth = self._writeBody
        meth(msg)

    #
    # Default handlers
    #

    def _write_headers(self, msg):
        for h, v in msg.raw_items():
            self.write(self.policy.fold(h, v))
        # A blank line always separates headers from body.
        self.write(self._NL)

    #
    # Handlers for writing types and subtypes
    #

    def _handle_text(self, msg):
        payload = msg.get_payload()
        if payload is None:
            return
        if not isinstance(payload, str):
            raise TypeError('string payload expected: %s' % type(payload))
        if _has_surrogates(msg._payload):
            charset = msg.get_param('charset')
            if charset is not None:
                # XXX: This copy stuff is an ugly hack to avoid modifying the
                # existing message.
                msg = deepcopy(msg)
                del msg['content-transfer-encoding']
                msg.set_payload(payload, charset)
                payload = msg.get_payload()
                self._munge_cte = (msg['content-transfer-encoding'],
                                   msg['content-type'])
        if self._mangle_from_:
            payload = fcre.sub('>From ', payload)
        self._write_lines(payload)

    # Default body handler
    _writeBody = _handle_text

    def _handle_multipart(self, msg):
        # The trick here is to write out each part separately, merge them all
        # together, and then make sure that the boundary we've chosen isn't
        # present in the payload.
        msgtexts = []
        subparts = msg.get_payload()
        if subparts is None:
            subparts = []
        elif isinstance(subparts, str):
            # e.g. a non-strict parse of a message with no starting boundary.
            self.write(subparts)
            return
        elif not isinstance(subparts, list):
            # Scalar payload
            subparts = [subparts]
        for part in subparts:
            s = self._new_buffer()
            g = self.clone(s)
            g.flatten(part, unixfrom=False, linesep=self._NL)
            msgtexts.append(s.getvalue())
        # BAW: What about boundaries that are wrapped in double-quotes?
        boundary = msg.get_boundary()
        if not boundary:
            # Create a boundary that doesn't appear in any of the
            # message texts.
            alltext = self._encoded_NL.join(msgtexts)
            boundary = self._make_boundary(alltext)
            msg.set_boundary(boundary)
        # If there's a preamble, write it out, with a trailing CRLF.
        if msg.preamble is not None:
            if self._mangle_from_:
                preamble = fcre.sub('>From ', msg.preamble)
            else:
                preamble = msg.preamble
            self._write_lines(preamble)
            self.write(self._NL)
        # dash-boundary transport-padding CRLF
        self.write('--' + boundary + self._NL)
        # body-part
        if msgtexts:
            self._fp.write(msgtexts.pop(0))
        # *encapsulation
        # --> delimiter transport-padding
        # --> CRLF body-part
        for body_part in msgtexts:
            # delimiter transport-padding CRLF
            self.write(self._NL + '--' + boundary + self._NL)
            # body-part
            self._fp.write(body_part)
        # close-delimiter transport-padding
        self.write(self._NL + '--' + boundary + '--' + self._NL)
        if msg.epilogue is not None:
            if self._mangle_from_:
                epilogue = fcre.sub('>From ', msg.epilogue)
            else:
                epilogue = msg.epilogue
            self._write_lines(epilogue)

    def _handle_multipart_signed(self, msg):
        # The contents of signed parts has to stay unmodified in order to keep
        # the signature intact per RFC1847 2.1, so we disable header wrapping.
        # RDM: This isn't enough to completely preserve the part, but it helps.
        p = self.policy
        self.policy = p.clone(max_line_length=0)
        try:
            self._handle_multipart(msg)
        finally:
            self.policy = p

    def _handle_message_delivery_status(self, msg):
        # We can't just write the headers directly to self's file object
        # because this will leave an extra newline between the last header
        # block and the boundary.  Sigh.
        blocks = []
        for part in msg.get_payload():
            s = self._new_buffer()
            g = self.clone(s)
            g.flatten(part, unixfrom=False, linesep=self._NL)
            text = s.getvalue()
            lines = text.split(self._encoded_NL)
            # Strip off the unnecessary trailing empty line.
            if lines and lines[-1] == self._encoded_EMPTY:
                blocks.append(self._encoded_NL.join(lines[:-1]))
            else:
                blocks.append(text)
        # Now join all the blocks with an empty line; this has the lovely
        # effect of separating each block with an empty line, but not adding
        # an extra one after the last one.
        self._fp.write(self._encoded_NL.join(blocks))

    def _handle_message(self, msg):
        s = self._new_buffer()
        g = self.clone(s)
        # The payload of a message/rfc822 part should be a multipart sequence
        # of length 1.  The zeroth element of the list should be the Message
        # object for the subpart.  Extract that object, stringify it, and
        # write it out.
        # Except, it turns out, when it's a string instead, which happens when
        # and only when HeaderParser is used on a message of mime type
        # message/rfc822.  Such messages are generated by, for example,
        # Groupwise when forwarding unadorned messages.  (Issue 7970.)  So
        # in that case we just emit the string body.
        payload = msg._payload
        if isinstance(payload, list):
            g.flatten(msg.get_payload(0), unixfrom=False, linesep=self._NL)
            payload = s.getvalue()
        else:
            payload = self._encode(payload)
        self._fp.write(payload)

    # This used to be a module level function; we use a classmethod for this
    # and _compile_re so we can continue to provide the module level function
    # for backward compatibility by doing
    #   _make_boundary = Generator._make_boundary
    # at the end of the module.  It *is* internal, so we could drop that...
    @classmethod
    def _make_boundary(cls, text=None):
        # Craft a random boundary.  If text is given, ensure that the chosen
        # boundary doesn't appear in the text.
        token = random.randrange(sys.maxsize)
        boundary = ('=' * 15) + (_fmt % token) + '=='
        if text is None:
            return boundary
        b = boundary
        counter = 0
        while True:
            cre = cls._compile_re('^--' + re.escape(b) + '(--)?$', re.MULTILINE)
            if not cre.search(text):
                break
            b = boundary + '.' + str(counter)
            counter += 1
        return b

    @classmethod
    def _compile_re(cls, s, flags):
        return re.compile(s, flags)


class BytesGenerator(Generator):
    """Generates a bytes version of a Message object tree.

    Functionally identical to the base Generator except that the output is
    bytes and not string.  When surrogates were used in the input to encode
    bytes, these are decoded back to bytes for output.  If the policy has
    cte_type set to 7bit, then the message is transformed such that the
    non-ASCII bytes are properly content transfer encoded, using the charset
    unknown-8bit.

    The outfp object must accept bytes in its write method.
    """

    def write(self, s):
        self._fp.write(s.encode('ascii', 'surrogateescape'))

    def _new_buffer(self):
        return BytesIO()

    def _encode(self, s):
        return s.encode('ascii')

    def _write_headers(self, msg):
        # This is almost the same as the string version, except for handling
        # strings with 8bit bytes.
        for h, v in msg.raw_items():
            self._fp.write(self.policy.fold_binary(h, v))
        # A blank line always separates headers from body.
        self.write(self._NL)

    def _handle_text(self, msg):
        # If the string has surrogates the original source was bytes, so
        # just write it back out.
        if msg._payload is None:
            return
        if _has_surrogates(msg._payload) and not self.policy.cte_type == '7bit':
            if self._mangle_from_:
                msg._payload = fcre.sub(">From ", msg._payload)
            self._write_lines(msg._payload)
        else:
            super(BytesGenerator, self)._handle_text(msg)

    # Default body handler
    _writeBody = _handle_text

    @classmethod
    def _compile_re(cls, s, flags):
        return re.compile(s.encode('ascii'), flags)


_FMT = '[Non-text (%(type)s) part of message omitted, filename %(filename)s]'


class DecodedGenerator(Generator):
    """Generates a text representation of a message.

    Like the Generator base class, except that non-text parts are substituted
    with a format string representing the part.  The format string is
    expanded with the following %-style keywords: type (full MIME type),
    maintype, subtype, filename, description (Content-Description) and
    encoding (Content-Transfer-Encoding) of the non-text part.
    """

    def __init__(self, outfp, mangle_from_=None, maxheaderlen=None, fmt=None,
                 *, policy=None):
        """Like Generator.__init__() except that an additional optional
        argument, fmt, is allowed.
        """
        Generator.__init__(self, outfp, mangle_from_, maxheaderlen,
                           policy=policy)
        if fmt is None:
            self._fmt = _FMT
        else:
            self._fmt = fmt

    def _dispatch(self, msg):
        # Walks through all subparts of a message.  Text parts are printed
        # decoded; multipart containers are skipped; everything else gets the
        # format string.
        for part in msg.walk():
            maintype = part.get_content_maintype()
            if maintype == 'text':
                print(part.get_payload(decode=False), file=self)
            elif maintype == 'multipart':
                # Just skip this.
                pass
            else:
                print(self._fmt % {
                    'type':        part.get_content_type(),
                    'maintype':    part.get_content_maintype(),
                    'subtype':     part.get_content_subtype(),
                    'filename':    part.get_filename('[no filename]'),
                    'description': part.get('Content-Description',
                                            '[no description]'),
                    'encoding':    part.get('Content-Transfer-Encoding',
                                            '[no encoding]'),
                }, file=self)


# Helper used by Generator._make_boundary
_width = len(repr(sys.maxsize - 1))
_fmt = '%%0%dd' % _width

# Backward compatibility
_make_boundary = Generator._make_boundary
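To see `_handle_multipart`'s boundary selection and the `mangle_from_` escaping end to end, here is a small hedged demo against the stdlib generator (the part texts are invented):

```python
from io import BytesIO
from email.generator import BytesGenerator
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

# Two text parts; the second starts a line with "From " so that
# mangle_from_ has something to escape.
outer = MIMEMultipart()
outer.attach(MIMEText('first part'))
outer.attach(MIMEText('From the start'))

buf = BytesIO()
BytesGenerator(buf, mangle_from_=True).flatten(outer)
raw = buf.getvalue()

# The generator chose a boundary during flatten() and stored it back on
# the message via set_boundary(); it must frame both parts.
boundary = outer.get_boundary().encode('ascii')
print(raw.decode('ascii', 'replace'))
```

The output contains the dash-boundary line before each part and the close-delimiter `--boundary--` at the end, exactly as the RFC 2046 comments in `_handle_multipart` describe.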
c 20022007 python software foundation ben gertzfield barry warsaw contact emailsigpython org header encoding and decoding functionality all header decodeheader makeheader import re import binascii import email quoprimime import email base64mime from email errors import headerparseerror from email import charset as charset charset charset charset nl n space bspace b space8 8 emptystring maxlinelen 78 fws t usascii charset usascii utf8 charset utf8 match encodedword strings in the form charset q helloworld ecre re compiler literal pcharset nongreedy up to the next is the charset literal pencodingqqbb either a q or a b case insensitive literal pencoded nongreedy up to the next is the encoded string literal field name regexp including trailing colon but not separating whitespace according to rfc 2822 character range is from tilde to exclamation mark for use with match find a header embedded in a putative header value used to check for header injection attack helpers decode a message header value without converting charset returns a list of string charset pairs containing each of the decoded parts of the header charset is none for nonencoded parts of the header otherwise a lowercase string containing the name of the character set specified in the encoded string header may be a string that may or may not contain rfc2047 encoded words or it may be a header object an email errors headerparseerror may be raised when certain decoding error occurs e g a base64 decoding exception if it is a header object we can just return the encoded chunks if no encoding just return the header with no charset first step is to parse all the encoded parts into triplets of the form encodedstring encoding charset for unencoded strings the last two parts will be none now loop over words and remove words that consist of whitespace between two encoded strings the next step is to decode each encoded word by applying the reverse base64 or quopri transformation decodedwords is now a list of the form 
decodedword charset this is an unencoded word now convert all words to bytes and collapse consecutive runs of similarly encoded words create a header from a sequence of pairs as returned by decodeheader decodeheader takes a header value string and returns a sequence of pairs of the format decodedstring charset where charset is the string name of the character set this function takes one of those sequence of pairs and returns a header instance optional maxlinelen headername and continuationws are as in the header constructor none means usascii but we can simply pass it on to h append create a mimecompliant header that can contain many character sets optional s is the initial header value if none the initial header value is not set you can later append to the header with append method calls s may be a byte string or a unicode string but see the append documentation for semantics optional charset serves two purposes it has the same meaning as the charset argument to the append method it also sets the default character set for all subsequent append calls that omit the charset argument if charset is not provided in the constructor the usascii charset is used both as s s initial charset and as the default for subsequent append calls the maximum line length can be specified explicitly via maxlinelen for splitting the first line to a shorter value to account for the field header which isn t included in s e g subject pass in the name of the field in headername the default maxlinelen is 78 as recommended by rfc 2822 continuationws must be rfc 2822 compliant folding whitespace usually either a space or a hard tab which will be prepended to continuation lines errors is passed through to the append call take the separating colon and space into account return the string value of the header self normalize uchunks lastcs none lastspace none for string charset in self chunks we must preserve spaces between encoded and nonencoded word boundaries which means for us we need to add a 
space when we go from a charset to noneusascii or from noneusascii to a charset only do this for the second and subsequent chunks don t add a space if the noneusascii string already has a space trailing or leading depending on transition nextcs charset if nextcs charset unknown8bit originalbytes string encode ascii surrogateescape string originalbytes decode ascii replace if uchunks hasspace string and self nonctextstring0 if lastcs not in none usascii if nextcs in none usascii and not hasspace uchunks appendspace nextcs none elif nextcs not in none usascii and not lastspace uchunks appendspace lastspace string and self nonctextstring1 lastcs nextcs uchunks appendstring return emptystring joinuchunks rich comparison operators for equality only baw does it make sense to have or explicitly disable operators def eqself other other may be a header or a string both are fine so coerce ourselves to a unicode of the unencoded header value swap the args and do another comparison return other strself def appendself s charsetnone errors strict if charset is none charset self charset elif not isinstancecharset charset charset charsetcharset if not isinstances str inputcharset charset inputcodec or usascii if inputcharset charset unknown8bit s s decode usascii surrogateescape else s s decodeinputcharset errors ensure that the bytes we re storing can be decoded to the output character set otherwise an early error is raised outputcharset charset outputcodec or usascii if outputcharset charset unknown8bit try s encodeoutputcharset errors except unicodeencodeerror if outputcharset usascii raise charset utf8 self chunks appends charset def nonctextself s return s isspace or s in def encodeself splitchars t maxlinelennone linesep n rencode a message header into an rfccompliant format there are many issues involved in converting a given string for use in an email header only certain character sets are readable in most email clients and as header strings can only contain a subset of 
7bit ascii care must be taken to properly convert and encode with base64 or quotedprintable header strings in addition there is a 75character length limit on any given encoded header field so linewrapping must be performed even with doublebyte character sets optional maxlinelen specifies the maximum length of each generated line exclusive of the linesep string individual lines may be longer than maxlinelen if a folding point cannot be found the first line will be shorter by the length of the header name plus if a header name was specified at header construction time the default value for maxlinelen is determined at header construction time optional splitchars is a string containing characters which should be given extra weight by the splitting algorithm during normal header wrapping this is in very rough support of rfc 2822 s higher level syntactic breaks split points preceded by a splitchar are preferred during line splitting with the characters preferred in the order in which they appear in the string space and tab may be included in the string to indicate whether preference should be given to one over the other as a split point when other split chars do not appear in the line being split splitchars does not affect rfc 2047 encoded lines optional linesep is a string to be used to separate the lines of the value the default value is the most useful for typical python applications but it can be set to rn to produce rfccompliant line separators when needed a maxlinelen of 0 means don t wrap for all practical purposes choosing a huge number here accomplishes that and makes the valueformatter algorithm much simpler step 1 normalize the chunks so that all runs of identical charsets get collapsed into a single unicode string if the charset has no header encoding i e it is an ascii encoding then we must split the header at the highest level syntactic break possible note that we don t have a lot of smarts about field syntax we just try to break on semicolons then commas 
then whitespace eventually this should be pluggable otherwise we re doing either a base64 or a quotedprintable encoding which means we don t need to split the line on syntactic breaks we can basically just find enough characters to fit on the current line minus the rfc 2047 chrome what makes this trickier though is that we have to split at octet boundaries not character boundaries but it s only safe to split at character boundaries so at best we can only get close the first element extends the current line but if it s none then nothing more fit on the current line so start a new line there are no encoded lines so we re done there was only one line everything else are full lines in themselves the first line s length the rfc 2822 header folding algorithm is simple in principle but complex in practice lines may be folded any place where folding white space appears by inserting a linesep character in front of the fws the complication is that not all spaces or tabs qualify as fws and we are also supposed to prefer to break at higher level syntactic breaks we can t do either of these without intimate knowledge of the structure of structured headers which we don t have here so the best we can do here is prefer to break at the specified splitchars and hope that we don t choose any spaces or tabs that aren t legal fws this is at least better than the old algorithm where we would sometimes introduce fws after a splitchar or the algorithm before that where we would turn all white space runs into single spaces or tabs find the best split point working backward from the end there might be none on a long first line there will be a header so leave it on a line by itself we don t use continuationws here because the whitespace after a header should always be a space c 2002 2007 python software foundation ben gertzfield barry warsaw contact email sig python org header encoding and decoding functionality match encoded word strings in the form charset q hello_world literal p charset 
non greedy up to the next is the charset literal p encoding qqbb either a q or a b case insensitive literal p encoded non greedy up to the next is the encoded string literal field name regexp including trailing colon but not separating whitespace according to rfc 2822 character range is from tilde to exclamation mark for use with match find a header embedded in a putative header value used to check for header injection attack helpers decode a message header value without converting charset returns a list of string charset pairs containing each of the decoded parts of the header charset is none for non encoded parts of the header otherwise a lower case string containing the name of the character set specified in the encoded string header may be a string that may or may not contain rfc2047 encoded words or it may be a header object an email errors headerparseerror may be raised when certain decoding error occurs e g a base64 decoding exception if it is a header object we can just return the encoded chunks if no encoding just return the header with no charset first step is to parse all the encoded parts into triplets of the form encoded_string encoding charset for unencoded strings the last two parts will be none now loop over words and remove words that consist of whitespace between two encoded strings the next step is to decode each encoded word by applying the reverse base64 or quopri transformation decoded_words is now a list of the form decoded_word charset this is an unencoded word postel s law add missing padding now convert all words to bytes and collapse consecutive runs of similarly encoded words create a header from a sequence of pairs as returned by decode_header decode_header takes a header value string and returns a sequence of pairs of the format decoded_string charset where charset is the string name of the character set this function takes one of those sequence of pairs and returns a header instance optional maxlinelen header_name and continuation_ws 
are as in the Header constructor. (None means us-ascii, but we can simply pass it on to h.append().)

Header: Create a MIME-compliant header that can contain many character sets.

Optional s is the initial header value. If None (the default), the initial header value is not set. You can later append to the header with .append() method calls. s may be a byte string or a Unicode string, but see the .append() documentation for semantics.

Optional charset serves two purposes: it has the same meaning as the charset argument to the .append() method, and it also sets the default character set for all subsequent .append() calls that omit the charset argument. If charset is not provided in the constructor, the us-ascii charset is used both as s's initial charset and as the default for subsequent .append() calls.

The maximum line length can be specified explicitly via maxlinelen. For splitting the first line to a shorter value (to account for the field header, which isn't included in s, e.g. "Subject"), pass in the name of the field in header_name. The default maxlinelen is 78, as recommended by RFC 2822. (The header length calculation takes the separating colon and space into account.)

continuation_ws must be RFC 2822-compliant folding whitespace (usually either a space or a hard tab), which will be prepended to continuation lines. errors is passed through to the .append() call.

__str__: Return the string value of the header. We must preserve spaces between encoded and non-encoded word boundaries, which means we need to add a space when we go from a charset to None/us-ascii, or from None/us-ascii to a charset. Only do this for the second and subsequent chunks, and don't add a space if the None/us-ascii string already has a space (trailing or leading, depending on the transition).

__eq__: Rich comparison operator, for equality only. (BAW: does it make sense to have, or to explicitly disable, the <, <=, >, >= operators?) other may be a Header or a string; both are fine, so we coerce ourselves to a Unicode string of the unencoded header value, swap the args, and do another comparison.

append: Append a string to the MIME header. Optional charset, if given, should be a Charset instance or the name of a character set (which will be converted to a Charset instance). A value of None (the default) means that the charset given in the constructor is used.

s may be a byte string or a Unicode string. If it is a byte string (i.e. isinstance(s, str) is false), then charset is the encoding of that byte string, and a UnicodeError will be raised if the string cannot be decoded with that charset. If s is a Unicode string, then charset is a hint specifying the character set of the characters in the string. In either case, when producing an RFC 2822-compliant header using RFC 2047 rules, the string will be encoded using the output codec of the charset. If the string cannot be encoded to the output codec, a UnicodeError will be raised.

Optional errors is passed as the errors argument to the decode call if s is a byte string. (If s is a byte string, we ensure that the bytes we're storing can be decoded to the output character set; otherwise an early error is raised.)

_nonctext: True if string s is not a ctext character of RFC 822.

encode: Encode a message header into an RFC-compliant format. There are many issues involved in converting a given string for use in an email header. Only certain character sets are readable in most email clients, and as header strings can only contain a subset of 7-bit ASCII, care must be taken to properly convert and encode (with Base64 or quoted-printable) header strings. In addition, there is a 75-character length limit on any given encoded header field, so line wrapping must be performed, even with double-byte character sets.

Optional maxlinelen specifies the maximum length of each generated line, exclusive of the linesep string. Individual lines may be longer than maxlinelen if a folding point cannot be found. The first line will be shorter by the length of the header name plus ": " if a header name was specified at Header construction time. The default value for maxlinelen is determined at Header construction time.

Optional splitchars is a string containing characters which should be given extra weight by the splitting algorithm during normal header wrapping. This is in very rough support of RFC 2822's "higher-level syntactic breaks": split points preceded by a splitchar are preferred during line splitting, with the characters preferred in the order in which they appear in the string. Space and tab may be included in the string to indicate whether preference should be given to one over the other as a split point when other split chars do not appear in the line being split. splitchars does not affect RFC 2047 encoded lines.

Optional linesep is a string to be used to separate the lines of the value. The default value is the most useful for typical Python applications, but it can be set to "\r\n" to produce RFC-compliant line separators when needed. A maxlinelen of 0 means "don't wrap"; for all practical purposes, choosing a huge number internally accomplishes that and makes the _ValueFormatter algorithm much simpler.

_normalize: Step 1 is to normalize the chunks so that all runs of identical charsets get collapsed into a single Unicode string.

_ValueFormatter.feed: If the charset has no header encoding (i.e. it is an ASCII encoding), then we must split the header at the highest-level syntactic break possible. Note that we don't have a lot of smarts about field syntax; we just try to break on semicolons, then commas, then whitespace. Eventually this should be pluggable. Otherwise, we're doing either a Base64 or a quoted-printable encoding, which means we don't need to split the line on syntactic breaks; we can basically just find enough characters to fit on the current line, minus the RFC 2047 chrome. What makes this trickier, though, is that we have to split at octet boundaries, not character boundaries; but it's only safe to split at character boundaries, so at best we can only get close. The first element extends the current line, but if it's None then nothing more fit on the current line, so start a new line. If there are no encoded lines, we're done; if there was only one line, likewise. Everything else are full lines in themselves. (_maxlengths yields the first line's length first.)

_append_chunk: The RFC 2822 header folding algorithm is simple in principle but complex in practice. Lines may be folded any place where "folding white space" (FWS) appears, by inserting a linesep character in front of the FWS. The complication is that not all spaces or tabs qualify as FWS, and we are also supposed to prefer to break at higher-level syntactic breaks. We can't do either of these without intimate knowledge of the structure of structured headers, which we don't have here. So the best we can do is prefer to break at the specified splitchars and hope that we don't choose any spaces or tabs that aren't legal FWS. (This is at least better than the old algorithm, where we would sometimes introduce FWS after a splitchar, or the algorithm before that, where we would turn all whitespace runs into single spaces or tabs.) We find the best split point working backward from the end; there might be none, on a long first line. If there is a header, we leave it on a line by itself; we don't use continuation_ws there, because the whitespace after a header should always be a space.
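The constructor, append(), encode(), and decode_header()/make_header() behavior described above can be exercised directly with the stdlib email.header module. The sample strings below are illustrative choices, not values from the source:

```python
from email.header import Header, decode_header, make_header

# Build a header mixing us-ascii and a non-ascii charset; encode() applies
# RFC 2047 encoded-word rules using each chunk's charset output codec.
h = Header('Hello ', charset='us-ascii', header_name='Subject')
h.append('Grüße', charset='iso-8859-1')
encoded = h.encode()
# The non-ascii chunk becomes a quoted-printable encoded word:
# Hello =?iso-8859-1?q?Gr=FC=DFe?=

# decode_header() reverses the process, returning (bytes, charset) pairs;
# make_header() rebuilds a Header from such a sequence.
parts = decode_header(encoded)
roundtrip = str(make_header(parts))
assert roundtrip == 'Hello Grüße'
```

Note that decode_header() returns the raw decoded bytes together with the charset name, so the caller (here make_header()) is responsible for the final charset conversion.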
__all__ = [
    'Header',
    'decode_header',
    'make_header',
    ]

import re
import binascii

import email.quoprimime
import email.base64mime

from email.errors import HeaderParseError
from email import charset as _charset
Charset = _charset.Charset

NL = '\n'
SPACE = ' '
BSPACE = b' '
SPACE8 = ' ' * 8
EMPTYSTRING = ''
MAXLINELEN = 78
FWS = ' \t'

USASCII = Charset('us-ascii')
UTF8 = Charset('utf-8')

# Match encoded-word strings in the form =?charset?q?Hello_World?=
# (the pattern body was truncated in this copy; restored from the upstream
# CPython source).
ecre = re.compile(r'''
  =\?                   # literal =?
  (?P<charset>[^?]*?)   # non-greedy up to the next ? is the charset
  \?                    # literal ?
  (?P<encoding>[qQbB])  # either a "q" or a "b", case insensitive
  \?                    # literal ?
  (?P<encoded>.*?)      # non-greedy up to the next ?= is the encoded string
  \?=                   # literal ?=
  ''', re.VERBOSE | re.MULTILINE)

# Field name regexp, including trailing colon, according to RFC 2822.
fcre = re.compile(r'[\041-\176]+:$')

# Find a header embedded in a putative header value, to guard against
# header injection.
_embedded_header = re.compile(r'\n[^ \t]+:')

_max_append = email.quoprimime._max_append


def decode_header(header):
    # If it is a Header object, we can just return the chunks.
    if hasattr(header, '_chunks'):
        return [(_charset._encode(string, str(charset)), str(charset))
                    for string, charset in header._chunks]
    # If no encoding, just return the header with no charset.
    if not ecre.search(header):
        return [(header, None)]
    # Parse the header into triplets of (string, encoding, charset);
    # unencoded words carry None for the last two.
    words = []
    for line in header.splitlines():
        parts = ecre.split(line)
        first = True
        while parts:
            unencoded = parts.pop(0)
            if first:
                unencoded = unencoded.lstrip()
                first = False
            if unencoded:
                words.append((unencoded, None, None))
            if parts:
                charset = parts.pop(0).lower()
                encoding = parts.pop(0).lower()
                encoded = parts.pop(0)
                words.append((encoded, encoding, charset))
    # Remove words that consist solely of whitespace between two
    # encoded strings.
    droplist = []
    for n, w in enumerate(words):
        if n > 1 and w[1] and words[n-2][1] and words[n-1][0].isspace():
            droplist.append(n-1)
    for d in reversed(droplist):
        del words[d]
    # Decode each encoded word by reversing the base64 or quopri transform.
    decoded_words = []
    for encoded_string, encoding, charset in words:
        if encoding is None:
            # This is an unencoded word.
            decoded_words.append((encoded_string, charset))
        elif encoding == 'q':
            word = email.quoprimime.header_decode(encoded_string)
            decoded_words.append((word, charset))
        elif encoding == 'b':
            paderr = len(encoded_string) % 4
            if paderr:
                # Add missing base64 padding.
                encoded_string += '==='[:4 - paderr]
            try:
                word = email.base64mime.decode(encoded_string)
            except binascii.Error:
                raise HeaderParseError('Base64 decoding error')
            else:
                decoded_words.append((word, charset))
        else:
            raise AssertionError('Unexpected encoding: ' + encoding)
    # Convert all words to bytes and collapse consecutive runs of
    # similarly encoded words.
    collapsed = []
    last_word = last_charset = None
    for word, charset in decoded_words:
        if isinstance(word, str):
            word = bytes(word, 'raw-unicode-escape')
        if last_word is None:
            last_word = word
            last_charset = charset
        elif charset != last_charset:
            collapsed.append((last_word, last_charset))
            last_word = word
            last_charset = charset
        elif last_charset is None:
            last_word += BSPACE + word
        else:
            last_word += word
    collapsed.append((last_word, last_charset))
    return collapsed


def make_header(decoded_seq, maxlinelen=None, header_name=None,
                continuation_ws=' '):
    h = Header(maxlinelen=maxlinelen, header_name=header_name,
               continuation_ws=continuation_ws)
    for s, charset in decoded_seq:
        # None means us-ascii but we can simply pass it on to h.append()
        if charset is not None and not isinstance(charset, Charset):
            charset = Charset(charset)
        h.append(s, charset)
    return h


class Header:

    def __init__(self, s=None, charset=None,
                 maxlinelen=None, header_name=None,
                 continuation_ws=' ', errors='strict'):
        if charset is None:
            charset = USASCII
        elif not isinstance(charset, Charset):
            charset = Charset(charset)
        self._charset = charset
        self._continuation_ws = continuation_ws
        self._chunks = []
        if s is not None:
            self.append(s, charset, errors)
        if maxlinelen is None:
            maxlinelen = MAXLINELEN
        self._maxlinelen = maxlinelen
        if header_name is None:
            self._headerlen = 0
        else:
            # Take the separating colon and space into account.
            self._headerlen = len(header_name) + 2

    def __str__(self):
        self._normalize()
        uchunks = []
        lastcs = None
        lastspace = None
        for string, charset in self._chunks:
            nextcs = charset
            if nextcs == _charset.UNKNOWN8BIT:
                original_bytes = string.encode('ascii', 'surrogateescape')
                string = original_bytes.decode('ascii', 'replace')
            if uchunks:
                # Add a space at charset transitions unless one is
                # already present (see the prose above).
                hasspace = string and self._nonctext(string[0])
                if lastcs not in (None, 'us-ascii'):
                    if nextcs in (None, 'us-ascii') and not hasspace:
                        uchunks.append(SPACE)
                        nextcs = None
                elif nextcs not in (None, 'us-ascii') and not lastspace:
                    uchunks.append(SPACE)
            lastspace = string and self._nonctext(string[-1])
            lastcs = nextcs
            uchunks.append(string)
        return EMPTYSTRING.join(uchunks)

    def __eq__(self, other):
        # other may be a Header or a string; both compare equal against
        # our unencoded string value.
        return other == str(self)

    def append(self, s, charset=None, errors='strict'):
        if charset is None:
            charset = self._charset
        elif not isinstance(charset, Charset):
            charset = Charset(charset)
        if not isinstance(s, str):
            input_charset = charset.input_codec or 'us-ascii'
            if input_charset == _charset.UNKNOWN8BIT:
                s = s.decode('us-ascii', 'surrogateescape')
            else:
                s = s.decode(input_charset, errors)
        # Ensure that the bytes we're storing can be decoded to the output
        # character set, otherwise an early error is raised.
        output_charset = charset.output_codec or 'us-ascii'
        if output_charset != _charset.UNKNOWN8BIT:
            try:
                s.encode(output_charset, errors)
            except UnicodeEncodeError:
                if output_charset != 'us-ascii':
                    raise
                charset = UTF8
        self._chunks.append((s, charset))

    def _nonctext(self, s):
        # True if string s is not a ctext character of RFC822.
        return s.isspace() or s in ('(', ')', '\\')

    def encode(self, splitchars=';, \t', maxlinelen=None, linesep='\n'):
        self._normalize()
        if maxlinelen is None:
            maxlinelen = self._maxlinelen
        # A maxlinelen of 0 means don't wrap; a huge number accomplishes
        # that and keeps the _ValueFormatter algorithm simple.
        if maxlinelen == 0:
            maxlinelen = 1000000
        formatter = _ValueFormatter(self._headerlen, maxlinelen,
                                    self._continuation_ws, splitchars)
        lastcs = None
        hasspace = lastspace = None
        for string, charset in self._chunks:
            if hasspace is not None:
                hasspace = string and self._nonctext(string[0])
                if lastcs not in (None, 'us-ascii'):
                    if not hasspace or charset not in (None, 'us-ascii'):
                        formatter.add_transition()
                elif charset not in (None, 'us-ascii') and not lastspace:
                    formatter.add_transition()
            lastspace = string and self._nonctext(string[-1])
            lastcs = charset
            hasspace = False
            lines = string.splitlines()
            if lines:
                formatter.feed('', lines[0], charset)
            else:
                formatter.feed('', '', charset)
            for line in lines[1:]:
                formatter.newline()
                if charset.header_encoding is not None:
                    formatter.feed(self._continuation_ws,
                                   ' ' + line.lstrip(), charset)
                else:
                    sline = line.lstrip()
                    fws = line[:len(line)-len(sline)]
                    formatter.feed(fws, sline, charset)
            if len(lines) > 1:
                formatter.newline()
        if self._chunks:
            formatter.add_transition()
        value = formatter._str(linesep)
        if _embedded_header.search(value):
            raise HeaderParseError("header value appears to contain "
                                   "an embedded header: {!r}".format(value))
        return value

    def _normalize(self):
        # Collapse all runs of identical charsets into single strings.
        chunks = []
        last_charset = None
        last_chunk = []
        for string, charset in self._chunks:
            if charset == last_charset:
                last_chunk.append(string)
            else:
                if last_charset is not None:
                    chunks.append((SPACE.join(last_chunk), last_charset))
                last_chunk = [string]
                last_charset = charset
        if last_chunk:
            chunks.append((SPACE.join(last_chunk), last_charset))
        self._chunks = chunks


class _ValueFormatter:

    def __init__(self, headerlen, maxlen, continuation_ws, splitchars):
        self._maxlen = maxlen
        self._continuation_ws = continuation_ws
        self._continuation_ws_len = len(continuation_ws)
        self._splitchars = splitchars
        self._lines = []
        self._current_line = _Accumulator(headerlen)

    def _str(self, linesep):
        self.newline()
        return linesep.join(self._lines)

    def __str__(self):
        return self._str(NL)

    def newline(self):
        end_of_line = self._current_line.pop()
        if end_of_line != (' ', ''):
            self._current_line.push(*end_of_line)
        if len(self._current_line) > 0:
            if self._current_line.is_onlyws() and self._lines:
                self._lines[-1] += str(self._current_line)
            else:
                self._lines.append(str(self._current_line))
        self._current_line.reset()

    def add_transition(self):
        self._current_line.push(' ', '')

    def feed(self, fws, string, charset):
        # An ASCII charset: split at the highest-level syntactic break.
        if charset.header_encoding is None:
            self._ascii_split(fws, string, self._splitchars)
            return
        # Otherwise a Base64 or quoted-printable encoding: fit as many
        # characters as possible, minus the RFC 2047 chrome.
        encoded_lines = charset.header_encode_lines(string, self._maxlengths())
        # The first element extends the current line; if it's None,
        # nothing more fit, so start a new line.
        try:
            first_line = encoded_lines.pop(0)
        except IndexError:
            # There are no encoded lines, so we're done.
            return
        if first_line is not None:
            self._append_chunk(fws, first_line)
        try:
            last_line = encoded_lines.pop()
        except IndexError:
            # There was only one line.
            return
        self.newline()
        self._current_line.push(self._continuation_ws, last_line)
        # Everything else are full lines in themselves.
        for line in encoded_lines:
            self._lines.append(self._continuation_ws + line)

    def _maxlengths(self):
        # The first line's length.
        yield self._maxlen - len(self._current_line)
        while True:
            yield self._maxlen - self._continuation_ws_len

    def _ascii_split(self, fws, string, splitchars):
        parts = re.split("(["+FWS+"]+)", fws+string)
        if parts[0]:
            parts[:0] = ['']
        else:
            parts.pop(0)
        for fws, part in zip(*[iter(parts)]*2):
            self._append_chunk(fws, part)

    def _append_chunk(self, fws, string):
        self._current_line.push(fws, string)
        if len(self._current_line) > self._maxlen:
            # Find the best split point, working backward from the end.
            # There might be none, on a long first line.
            for ch in self._splitchars:
                for i in range(self._current_line.part_count()-1, 0, -1):
                    if ch.isspace():
                        fws = self._current_line[i][0]
                        if fws and fws[0] == ch:
                            break
                    prevpart = self._current_line[i-1][1]
                    if prevpart and prevpart[-1] == ch:
                        break
                else:
                    continue
                break
            else:
                fws, part = self._current_line.pop()
                if self._current_line._initial_size > 0:
                    # There will be a header, so leave it on a line by
                    # itself.
                    self.newline()
                    if not fws:
                        # We don't use continuation_ws here because the
                        # whitespace after a header should always be a
                        # space.
                        fws = ' '
                self._current_line.push(fws, part)
                return
            remainder = self._current_line.pop_from(i)
            self._lines.append(str(self._current_line))
            self._current_line.reset(remainder)


class _Accumulator(list):

    def __init__(self, initial_size=0):
        self._initial_size = initial_size
        super().__init__()

    def push(self, fws, string):
        self.append((fws, string))

    def pop_from(self, i=0):
        popped = self[i:]
        self[i:] = []
        return popped

    def pop(self):
        if self.part_count() == 0:
            return ('', '')
        return super().pop()

    def __len__(self):
        return sum((len(fws)+len(part) for fws, part in self),
                   self._initial_size)

    def __str__(self):
        return EMPTYSTRING.join((EMPTYSTRING.join((fws, part))
                                for fws, part in self))

    def reset(self, startval=None):
        if startval is None:
            startval = []
        self[:] = startval
        self._initial_size = 0

    def is_onlyws(self):
        return self._initial_size == 0 and (not self or str(self).isspace())

    def part_count(self):
        return super().__len__()
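The folding machinery described above (_ValueFormatter and _Accumulator) is exercised whenever encode() must wrap a long value. A quick sketch, with illustrative values chosen here rather than taken from the source:

```python
from email.header import Header

# A long ASCII Subject forces folding; splitchars prefers breaking at ';',
# then ',', then whitespace, and each continuation line begins with the
# folding whitespace that was already present in the value.
h = Header('word ' * 20, header_name='Subject', maxlinelen=30)
folded = h.encode()
lines = folded.splitlines()

assert len(lines) > 1                              # value was folded
assert all(len(line) <= 30 for line in lines)      # split points existed
assert all(line[:1].isspace() for line in lines[1:])  # FWS-led continuations
assert ''.join(lines).split() == ['word'] * 20     # no content lost
```

Because the fold only inserts a line separator in front of existing FWS, joining the lines back together recovers the original words.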
Representing and manipulating email headers via custom objects.

This module provides an implementation of the HeaderRegistry API. The implementation is designed to flexibly follow RFC 5322 rules.

Address: Create an object representing a full email address. An address can have a display_name, a username, and a domain. In addition to specifying the username and domain separately, they may be specified together by using the addr_spec keyword instead of the username and domain keywords. If an addr_spec string is specified, it must be properly quoted according to RFC 5322 rules; an error will be raised if it is not.

An Address object has display_name, username, domain, and addr_spec attributes, all of which are read-only. The addr_spec and the string value of the object are both quoted according to RFC 5322 rules, but without any Content Transfer Encoding. (The constructor clause with its potential raise may only happen when an application program creates an Address object using an addr_spec keyword; the email library code itself must always supply username and domain.) The addr_spec attribute is the username@domain portion of the address, quoted according to RFC 5322 rules but with no Content Transfer Encoding.

Group: Create an object representing an address group. An address group consists of a display_name followed by a colon and a list of addresses (see Address) terminated by a semicolon. The group is specified by a display_name and a possibly empty list of Address objects. A Group can also be used to represent a single address that is not in a group, which is convenient when manipulating lists that are a combination of groups and individual addresses; in this case the display_name should be set to None. In particular, the string representation of a Group whose display_name is None is the same as the Address object, if there is one and only one Address object in the addresses list.

Header classes. BaseHeader is the base class for message headers; it implements generic behavior and provides tools for subclasses. A subclass must define a classmethod named parse that takes an unfolded value string and a dictionary as its arguments. The dictionary will contain one key, 'defects', initialized to an empty list. After the call, the dictionary must contain two additional keys: 'parse_tree', set to the parse tree obtained from parsing the header, and 'decoded', set to the string value of the idealized representation of the data from the value. That is, encoded words are decoded, and values that have canonical representations are so represented.

The defects key is intended to collect parsing defects, which the message parser will subsequently dispose of as appropriate. The parser should not, insofar as practical, raise any errors; defects should be added to the list instead. The standard header parsers register defects for RFC compliance issues, for obsolete RFC syntax, and for unrecoverable parsing errors.

The parse method may add additional keys to the dictionary. In this case the subclass must define an init method, which will be passed the dictionary as its keyword arguments. The method should use them (usually by setting them as the value of similarly named attributes), remove all the extra keys added by its parse method, and then use super to call its parent class with the remaining arguments and keywords. The subclass should also make sure that a max_count attribute is defined that is either None or 1. (XXX: need to better define this API.)

fold: Fold the header according to policy. The parsed representation of the header is folded according to RFC 5322 rules, as modified by the policy. If the parse tree contains surrogateescaped bytes, the bytes are CTE-encoded using the charset unknown-8bit. Any non-ASCII characters in the parse tree are CTE-encoded using charset utf-8. (XXX: make this a policy setting.) The returned value is an ASCII-only string, possibly containing linesep characters, and ending with a linesep character. The string includes the header name and the ': ' separator. (At some point we need to put FWS here if it was in the source.)

DateHeader: a header whose value consists of a single timestamp. It provides an additional attribute, datetime, which is either an aware datetime using a timezone, or a naive datetime if the timezone in the input string is -0000. It also accepts a datetime as input. The value attribute is the normalized form of the timestamp, which means it is the output of format_datetime on the datetime. (The parse tree is used only for folding, not for creating 'decoded'.)

AddressHeader: here we are translating from the RFC language (address/mailbox) to our API language (group/address); an input item without an addresses attribute is assumed to be an Address, and is wrapped in a single-address Group.

ParameterizedMIMEHeader: a mixin that handles the params dict. It must be subclassed, and a value_parser property for the specific header must be provided. The MIME RFCs specify that parameter ordering is arbitrary.

The header factory. HeaderRegistry is a header_factory that works with the Policy API. base_class is the class that will be the last class in the created header class's __bases__ list. default_class is the class that will be used if name (see __call__) does not appear in the registry. use_default_map controls whether or not the default mapping of names to specialized classes is copied into the registry when the factory is created; the default is True. map_to_type registers cls as the specialized class for handling name headers. __call__ creates a header instance for header name from value: it creates a specialized class for parsing and representing the specified header by combining the factory base_class with a specialized class from the registry (or the default_class), and passes the name and value to the constructed class's constructor.
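The factory behavior just described can be tried directly with the stdlib email.headerregistry module; the sample address values are illustrative:

```python
from email.headerregistry import HeaderRegistry

registry = HeaderRegistry()

# __getitem__ builds a specialized class by combining the registry entry
# for 'to' (UniqueAddressHeader) with base_class; __call__ instantiates it.
h = registry('To', 'Fred Bloggs <fred@example.com>, barney@example.com')
assert h.__class__.__name__ == '_UniqueAddressHeader'

# The typed header exposes the parsed addresses, not just the string.
assert [a.username for a in h.addresses] == ['fred', 'barney']
assert h.addresses[0].display_name == 'Fred Bloggs'
```

Note that the returned object is still a str subclass (via BaseHeader), so it compares equal to its decoded string value while also carrying the parse tree.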
from types import MappingProxyType from email import utils from email import errors from email import _header_value_parser as parser class Address: def __init__(self, display_name='', username='', domain='', addr_spec=None): inputs = ''.join(filter(None, (display_name, username, domain, addr_spec))) if '\r' in inputs or '\n' in inputs: raise ValueError("invalid arguments; address parts cannot contain CR or LF") if addr_spec is not None: if username or domain: raise TypeError("addrspec specified when username and/or " "domain also specified") a_s, rest = parser.get_addr_spec(addr_spec) if rest: raise ValueError("Invalid addr_spec; only '{}' " "could be parsed from '{}'".format( a_s, addr_spec)) if a_s.all_defects: raise a_s.all_defects[0] username = a_s.local_part domain = a_s.domain self._display_name = display_name self._username = username self._domain = domain @property def display_name(self): return self._display_name @property def username(self): return self._username @property def domain(self): return self._domain @property def addr_spec(self): lp = self.username if not parser.DOT_ATOM_ENDS.isdisjoint(lp): lp = parser.quote_string(lp) if self.domain: return lp + '@' + self.domain if not lp: return '<>' return lp def __repr__(self): return "{}(display_name={!r}, username={!r}, domain={!r})".format( self.__class__.__name__, self.display_name, self.username, self.domain) def __str__(self): disp = self.display_name if not parser.SPECIALS.isdisjoint(disp): disp = parser.quote_string(disp) if disp: addr_spec = '' if self.addr_spec=='<>' else self.addr_spec return "{} <{}>".format(disp, addr_spec) return self.addr_spec def __eq__(self, other): if not isinstance(other, Address): return NotImplemented return (self.display_name == other.display_name and self.username == other.username and self.domain == other.domain) class Group: def __init__(self, display_name=None, addresses=None): self._display_name = display_name self._addresses = tuple(addresses) if addresses else 
tuple() @property def display_name(self): return self._display_name @property def addresses(self): return self._addresses def __repr__(self): return "{}(display_name={!r}, addresses={!r}".format( self.__class__.__name__, self.display_name, self.addresses) def __str__(self): if self.display_name is None and len(self.addresses)==1: return str(self.addresses[0]) disp = self.display_name if disp is not None and not parser.SPECIALS.isdisjoint(disp): disp = parser.quote_string(disp) adrstr = ", ".join(str(x) for x in self.addresses) adrstr = ' ' + adrstr if adrstr else adrstr return "{}:{};".format(disp, adrstr) def __eq__(self, other): if not isinstance(other, Group): return NotImplemented return (self.display_name == other.display_name and self.addresses == other.addresses) class BaseHeader(str): def __new__(cls, name, value): kwds = {'defects': []} cls.parse(value, kwds) if utils._has_surrogates(kwds['decoded']): kwds['decoded'] = utils._sanitize(kwds['decoded']) self = str.__new__(cls, kwds['decoded']) del kwds['decoded'] self.init(name, **kwds) return self def init(self, name, *, parse_tree, defects): self._name = name self._parse_tree = parse_tree self._defects = defects @property def name(self): return self._name @property def defects(self): return tuple(self._defects) def __reduce__(self): return ( _reconstruct_header, ( self.__class__.__name__, self.__class__.__bases__, str(self), ), self.__getstate__()) @classmethod def _reconstruct(cls, value): return str.__new__(cls, value) def fold(self, *, policy): header = parser.Header([ parser.HeaderLabel([ parser.ValueTerminal(self.name, 'header-name'), parser.ValueTerminal(':', 'header-sep')]), ]) if self._parse_tree: header.append( parser.CFWSList([parser.WhiteSpaceTerminal(' ', 'fws')])) header.append(self._parse_tree) return header.fold(policy=policy) def _reconstruct_header(cls_name, bases, value): return type(cls_name, bases, {})._reconstruct(value) class UnstructuredHeader: max_count = None value_parser = 
staticmethod(parser.get_unstructured) @classmethod def parse(cls, value, kwds): kwds['parse_tree'] = cls.value_parser(value) kwds['decoded'] = str(kwds['parse_tree']) class UniqueUnstructuredHeader(UnstructuredHeader): max_count = 1 class DateHeader: max_count = None value_parser = staticmethod(parser.get_unstructured) @classmethod def parse(cls, value, kwds): if not value: kwds['defects'].append(errors.HeaderMissingRequiredValue()) kwds['datetime'] = None kwds['decoded'] = '' kwds['parse_tree'] = parser.TokenList() return if isinstance(value, str): kwds['decoded'] = value try: value = utils.parsedate_to_datetime(value) except ValueError: kwds['defects'].append(errors.InvalidDateDefect('Invalid date value or format')) kwds['datetime'] = None kwds['parse_tree'] = parser.TokenList() return kwds['datetime'] = value kwds['decoded'] = utils.format_datetime(kwds['datetime']) kwds['parse_tree'] = cls.value_parser(kwds['decoded']) def init(self, *args, **kw): self._datetime = kw.pop('datetime') super().init(*args, **kw) @property def datetime(self): return self._datetime class UniqueDateHeader(DateHeader): max_count = 1 class AddressHeader: max_count = None @staticmethod def value_parser(value): address_list, value = parser.get_address_list(value) assert not value, 'this should not happen' return address_list @classmethod def parse(cls, value, kwds): if isinstance(value, str): kwds['parse_tree'] = address_list = cls.value_parser(value) groups = [] for addr in address_list.addresses: groups.append(Group(addr.display_name, [Address(mb.display_name or '', mb.local_part or '', mb.domain or '') for mb in addr.all_mailboxes])) defects = list(address_list.all_defects) else: if not hasattr(value, '__iter__'): value = [value] groups = [Group(None, [item]) if not hasattr(item, 'addresses') else item for item in value] defects = [] kwds['groups'] = groups kwds['defects'] = defects kwds['decoded'] = ', '.join([str(item) for item in groups]) if 'parse_tree' not in kwds: 
        # (end of AddressHeader.parse)
        kwds['parse_tree'] = cls.value_parser(kwds['decoded'])

    def __init__(self, *args, **kw):
        self._groups = tuple(kw.pop('groups'))
        self._addresses = None
        super().__init__(*args, **kw)

    @property
    def groups(self):
        return self._groups

    @property
    def addresses(self):
        if self._addresses is None:
            self._addresses = tuple(address for group in self._groups
                                            for address in group.addresses)
        return self._addresses


class UniqueAddressHeader(AddressHeader):

    max_count = 1


class SingleAddressHeader(AddressHeader):

    @property
    def address(self):
        if len(self.addresses) != 1:
            raise ValueError(("value of single address header {} is not "
                              "a single address").format(self.name))
        return self.addresses[0]


class UniqueSingleAddressHeader(SingleAddressHeader):

    max_count = 1


class MIMEVersionHeader:

    max_count = 1

    value_parser = staticmethod(parser.parse_mime_version)

    @classmethod
    def parse(cls, value, kwds):
        kwds['parse_tree'] = parse_tree = cls.value_parser(value)
        kwds['decoded'] = str(parse_tree)
        kwds['defects'].extend(parse_tree.all_defects)
        kwds['major'] = None if parse_tree.minor is None else parse_tree.major
        kwds['minor'] = parse_tree.minor
        if parse_tree.minor is not None:
            kwds['version'] = '{}.{}'.format(kwds['major'], kwds['minor'])
        else:
            kwds['version'] = None

    def __init__(self, *args, **kw):
        self._version = kw.pop('version')
        self._major = kw.pop('major')
        self._minor = kw.pop('minor')
        super().__init__(*args, **kw)

    @property
    def major(self):
        return self._major

    @property
    def minor(self):
        return self._minor

    @property
    def version(self):
        return self._version


class ParameterizedMIMEHeader:

    # Mixin that handles the params dict.  Must be subclassed and
    # a property value_parser for the specific header provided.

    max_count = 1

    @classmethod
    def parse(cls, value, kwds):
        kwds['parse_tree'] = parse_tree = cls.value_parser(value)
        kwds['decoded'] = str(parse_tree)
        kwds['defects'].extend(parse_tree.all_defects)
        if parse_tree.params is None:
            kwds['params'] = {}
        else:
            kwds['params'] = {utils._sanitize(name).lower():
                                  utils._sanitize(value)
                              for name, value in parse_tree.params}

    def __init__(self, *args, **kw):
        self._params = kw.pop('params')
        super().__init__(*args, **kw)

    @property
    def params(self):
        return MappingProxyType(self._params)


class ContentTypeHeader(ParameterizedMIMEHeader):

    value_parser = staticmethod(parser.parse_content_type_header)

    def __init__(self, *args, **kw):
        super().__init__(*args, **kw)
        self._maintype = utils._sanitize(self._parse_tree.maintype)
        self._subtype = utils._sanitize(self._parse_tree.subtype)

    @property
    def maintype(self):
        return self._maintype

    @property
    def subtype(self):
        return self._subtype

    @property
    def content_type(self):
        return self.maintype + '/' + self.subtype


class ContentDispositionHeader(ParameterizedMIMEHeader):

    value_parser = staticmethod(parser.parse_content_disposition_header)

    def __init__(self, *args, **kw):
        super().__init__(*args, **kw)
        cd = self._parse_tree.content_disposition
        self._content_disposition = cd if cd is None else utils._sanitize(cd)

    @property
    def content_disposition(self):
        return self._content_disposition


class ContentTransferEncodingHeader:

    max_count = 1

    value_parser = staticmethod(parser.parse_content_transfer_encoding_header)

    @classmethod
    def parse(cls, value, kwds):
        kwds['parse_tree'] = parse_tree = cls.value_parser(value)
        kwds['decoded'] = str(parse_tree)
        kwds['defects'].extend(parse_tree.all_defects)

    def __init__(self, *args, **kw):
        super().__init__(*args, **kw)
        self._cte = utils._sanitize(self._parse_tree.cte)

    @property
    def cte(self):
        return self._cte


class MessageIDHeader:

    max_count = 1

    value_parser = staticmethod(parser.parse_message_id)

    @classmethod
    def parse(cls, value, kwds):
        kwds['parse_tree'] = parse_tree = cls.value_parser(value)
        kwds['decoded'] = str(parse_tree)
        kwds['defects'].extend(parse_tree.all_defects)


# The header factory #

_default_header_map = {
    'subject':                      UniqueUnstructuredHeader,
    'date':                         UniqueDateHeader,
    'resent-date':                  DateHeader,
    'orig-date':                    UniqueDateHeader,
    'sender':                       UniqueSingleAddressHeader,
    'resent-sender':                SingleAddressHeader,
    'to':                           UniqueAddressHeader,
    'resent-to':                    AddressHeader,
    'cc':                           UniqueAddressHeader,
    'resent-cc':                    AddressHeader,
    'bcc':                          UniqueAddressHeader,
    'resent-bcc':                   AddressHeader,
    'from':                         UniqueAddressHeader,
    'resent-from':                  AddressHeader,
    'reply-to':                     UniqueAddressHeader,
    'mime-version':                 MIMEVersionHeader,
    'content-type':                 ContentTypeHeader,
    'content-disposition':          ContentDispositionHeader,
    'content-transfer-encoding':    ContentTransferEncodingHeader,
    'message-id':                   MessageIDHeader,
    }


class HeaderRegistry:

    """A header_factory and header registry."""

    def __init__(self, base_class=BaseHeader, default_class=UnstructuredHeader,
                       use_default_map=True):
        self.registry = {}
        self.base_class = base_class
        self.default_class = default_class
        if use_default_map:
            self.registry.update(_default_header_map)

    def map_to_type(self, name, cls):
        self.registry[name.lower()] = cls

    def __getitem__(self, name):
        cls = self.registry.get(name.lower(), self.default_class)
        return type('_'+cls.__name__, (cls, self.base_class), {})

    def __call__(self, name, value):
        return self[name](name, value)
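Since this code mirrors the stdlib `email.headerregistry` module, the registry above can be exercised directly against the standard library. A short sketch (the address used is made up for illustration):

```python
# Demonstration of the HeaderRegistry factory, using the stdlib
# email.headerregistry module that this code mirrors.
from email.headerregistry import HeaderRegistry

registry = HeaderRegistry()

# __getitem__ builds a dynamic subclass combining the specialized header
# class with the base class; __call__ instantiates it from name + value.
to = registry('To', 'Fred Flintstone <fred@example.com>')
print(type(to).__name__)            # _UniqueAddressHeader
print(to.addresses[0].addr_spec)    # fred@example.com

# Names not in the registry fall back to the default (unstructured) class.
subject = registry('X-Custom', 'anything')
print(type(subject).__name__)       # _UnstructuredHeader
```

Note that `registry('To', ...)` parses the value eagerly, so structured attributes such as `.addresses` are available immediately on the returned header object.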
# Copyright (C) 2001-2006 Python Software Foundation
# Author: Barry Warsaw
# Contact: email-sig@python.org

"""Various types of useful iterators and generators."""

__all__ = [
    'body_line_iterator',
    'typed_subpart_iterator',
    'walk',
    # Do not include _structure() since it's part of the debugging API.
    ]

import sys
from io import StringIO


# This function will become a method of the Message class
def walk(self):
    """Walk over the message tree, yielding each subpart.

    The walk is performed in depth-first order.  This method is a
    generator.
    """
    yield self
    if self.is_multipart():
        for subpart in self.get_payload():
            yield from subpart.walk()


# These two functions are imported into the Iterators.py interface module.
def body_line_iterator(msg, decode=False):
    """Iterate over the parts, returning string payloads line-by-line.

    Optional decode (default False) is passed through to .get_payload().
    """
    for subpart in msg.walk():
        payload = subpart.get_payload(decode=decode)
        if isinstance(payload, str):
            yield from StringIO(payload)


def typed_subpart_iterator(msg, maintype='text', subtype=None):
    """Iterate over the subparts with a given MIME type.

    Use `maintype' as the main MIME type to match against; this defaults to
    "text".  Optional `subtype' is the MIME subtype to match against; if
    omitted, only the main type is matched.
    """
    for subpart in msg.walk():
        if subpart.get_content_maintype() == maintype:
            if subtype is None or subpart.get_content_subtype() == subtype:
                yield subpart


def _structure(msg, fp=None, level=0, include_default=False):
    """A handy debugging aid"""
    if fp is None:
        fp = sys.stdout
    tab = ' ' * (level * 4)
    print(tab + msg.get_content_type(), end='', file=fp)
    if include_default:
        print(' [%s]' % msg.get_default_type(), file=fp)
    else:
        print(file=fp)
    if msg.is_multipart():
        for subpart in msg.get_payload():
            _structure(subpart, fp, level+1, include_default)
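These iterators can be tried out against the stdlib `email.iterators` module, which this file mirrors. A quick sketch (the message content here is invented for the demo):

```python
# Demonstration of typed_subpart_iterator() and the _structure() debugging
# aid, using the stdlib email.iterators module that this file mirrors.
import io
from email.message import EmailMessage
from email.iterators import _structure, typed_subpart_iterator

msg = EmailMessage()
msg['Subject'] = 'demo'
msg.set_content('plain body')
msg.add_alternative('<p>html body</p>', subtype='html')

# typed_subpart_iterator narrows the depth-first walk to parts of a
# given MIME maintype/subtype.
html_parts = list(typed_subpart_iterator(msg, 'text', 'html'))
print(len(html_parts))          # 1

# _structure prints an indented content-type tree of the message.
buf = io.StringIO()
_structure(msg, fp=buf)
print(buf.getvalue())
```

The `_structure()` output for this message is a three-line tree: `multipart/alternative` at the root with `text/plain` and `text/html` indented beneath it.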
c 20012007 python software foundation barry warsaw contact emailsigpython org basic message object for the email package object model all message emailmessage import binascii import re import quopri from io import bytesio stringio intrapackage imports from email import utils from email import errors from email policybase import compat32 from email import charset as charset from email encodedwords import decodeb charset charset charset semispace regular expression that matches special characters in parameters the existence of which force quoting of the parameter value tspecials re compiler def splitparamparam split header parameters baw this may be too simple it isn t strictly rfc 2045 section 5 1 compliant but it catches most headers found in the wild we may eventually need a full fledged parser rdm we might have a header here for now just stringify it a sep b strparam partition if not sep return a strip none return a strip b strip def formatparamparam valuenone quotetrue if value is not none and lenvalue 0 a tuple is used for rfc 2231 encoded parameter values where items are charset language value charset is a string not a charset instance rfc 2231 encoded values are never quoted per rfc if isinstancevalue tuple encode as per rfc 2231 param value utils encoderfc2231value2 value0 value1 return ss param value else try value encode ascii except unicodeencodeerror param value utils encoderfc2231value utf8 return ss param value baw please check this i think that if quote is set it should force quoting even if not necessary if quote or tspecials searchvalue return ss param utils quotevalue else return ss param value else return param def parseparams rdm this might be a header so for now stringify it s strs plist while s 1 s s1 end s find while end 0 and s count 0 end s count 0 end 2 end s find end 1 if end 0 end lens f s end if in f i f index f f i strip lower fi1 strip plist appendf strip s send return plist def unquotevaluevalue this is different than utils 
collapserfc2231value because it doesn t try to convert the value to a unicode message getparam and message getparams are both currently defined to return the tuple in the face of rfc 2231 parameters if isinstancevalue tuple return value0 value1 utils unquotevalue2 else return utils unquotevalue def decodeuuencoded workaround for broken uuencoders by fredrik lundh basic message object a message object is defined as something that has a bunch of rfc 2822 headers and a payload it may optionally have an envelope header a k a unixfrom or from header if the message is a container i e a multipart or a messagerfc822 then the payload is a list of message objects otherwise it is a string message objects implement part of the mapping interface which assumes there is exactly one occurrence of the header per message some headers do in fact appear multiple times e g received and for those headers you must use the explicit api to set or get all the headers not all of the mapping methods are implemented defaults for multipart messages default content type return the entire formatted message as a string return the entire formatted message as a string optional unixfrom when true means include the unix from envelope header for backward compatibility reasons if maxheaderlen is not specified it defaults to 0 so you must override it explicitly if you want a different maxheaderlen policy is passed to the generator instance used to serialize the message if it is not specified the policy associated with the message instance is used if the message object contains binary data that is not encoded according to rfc standards the noncompliant data will be replaced by unicode unknown character code points return the entire formatted message as a bytes object return the entire formatted message as a bytes object optional unixfrom when true means include the unix from envelope header policy is passed to the bytesgenerator instance used to serialize the message if not specified the policy associated 
with the message instance is used return true if the message consists of multiple parts return isinstanceself payload list unix from line def setunixfromself unixfrom self unixfrom unixfrom def getunixfromself return self unixfrom payload manipulation def attachself payload if self payload is none self payload payload else try self payload appendpayload except attributeerror raise typeerrorattach is not valid on a message with a nonmultipart payload def getpayloadself inone decodefalse here is the logic table for this code based on the email5 0 0 code i decode ismultipart result none true true none i true true none none false true payload a list i false true payload element i a message i false false error not a list i true false error not a list none false false payload none true false payload decoded bytes note that barry planned to factor out the decode case but that isn t so easy now that we handle the 8 bit data which needs to be converted in both the decode and nondecode path if self ismultipart if decode return none if i is none return self payload else return self payloadi for backward compatibility use isinstance and this error message instead of the more logical ismultipart test if i is not none and not isinstanceself payload list raise typeerror expected list got s typeself payload payload self payload cte might be a header so for now stringify it cte strself get contenttransferencoding lower payload may be bytes here if isinstancepayload str if utils hassurrogatespayload bpayload payload encode ascii surrogateescape if not decode try payload bpayload decodeself getparam charset ascii replace except lookuperror payload bpayload decode ascii replace elif decode try bpayload payload encode ascii except unicodeerror this won t happen for rfc compliant messages messages containing only ascii code points in the unicode input if it does happen turn the string into bytes in a way guaranteed not to fail bpayload payload encode rawunicodeescape if not decode 
return payload if cte quotedprintable return quopri decodestringbpayload elif cte base64 xxx this is a bit of a hack decodeb should probably be factored out somewhere but i haven t figured out where yet value defects decodebb joinbpayload splitlines for defect in defects self policy handledefectself defect return value elif cte in xuuencode uuencode uue xuue try return decodeuubpayload except valueerror some decoding problem return bpayload if isinstancepayload str return bpayload return payload def setpayloadself payload charsetnone if hasattrpayload encode if charset is none self payload payload return if not isinstancecharset charset charset charsetcharset payload payload encodecharset outputcharset if hasattrpayload decode self payload payload decode ascii surrogateescape else self payload payload if charset is not none self setcharsetcharset def setcharsetself charset if charset is none self delparam charset self charset none return if not isinstancecharset charset charset charsetcharset self charset charset if mimeversion not in self self addheader mimeversion 1 0 if contenttype not in self self addheader contenttype textplain charsetcharset getoutputcharset else self setparam charset charset getoutputcharset if charset charset getoutputcharset self payload charset bodyencodeself payload if contenttransferencoding not in self cte charset getbodyencoding try cteself except typeerror this if is for backward compatibility it allows unicode through even though that won t work correctly if the message is serialized payload self payload if payload try payload payload encode ascii surrogateescape except unicodeerror payload payload encodecharset outputcharset self payload charset bodyencodepayload self addheader contenttransferencoding cte def getcharsetself return self charset mapping interface partial def lenself get a header value return none if the header is missing instead of raising an exception note that if the header appeared multiple times exactly which 
occurrence gets returned is undefined use getall to get all the values matching a header field name set the value of a header note this does not overwrite an existing header with the same field name use delitem first to delete any existing headers delete all occurrences of a header if present does not raise an exception if the header is missing return a list of all the message s header field names these will be sorted in the order they appeared in the original message or were added to the message and may contain duplicates any fields deleted and reinserted are always appended to the header list return a list of all the message s header values these will be sorted in the order they appeared in the original message or were added to the message and may contain duplicates any fields deleted and reinserted are always appended to the header list get all the message s header fields and values these will be sorted in the order they appeared in the original message or were added to the message and may contain duplicates any fields deleted and reinserted are always appended to the header list get a header value like getitem but return failobj instead of none when the field is missing internal methods public api but only intended for use by a parser or generator not normal application code store name and value in the model without modification this is an internal api intended only for use by a parser return the name value header pairs without modification this is an internal api intended only for use by a generator additional useful stuff return a list of all the values for the named field these will be sorted in the order they appeared in the original message and may contain duplicates any fields deleted and reinserted are always appended to the header list if no such fields exist failobj is returned defaults to none extended header setting name is the header field to add keyword arguments can be used to set additional parameters for the header field with underscores 
converted to dashes normally the parameter will be added as keyvalue unless value is none in which case only the key will be added if a parameter value contains nonascii characters it can be specified as a threetuple of charset language value in which case it will be encoded according to rfc2231 rules otherwise it will be encoded using the utf8 charset and a language of examples msg addheader contentdisposition attachment filename bud gif msg addheader contentdisposition attachment filename utf8 fuballer ppt msg addheader contentdisposition attachment filename fuballer ppt replace a header replace the first matching header found in the message retaining header order and case if no matching header was found a keyerror is raised use these three methods instead of the three above return the message s content type the returned string is coerced to lower case of the form maintypesubtype if there was no contenttype header in the message the default type as given by getdefaulttype will be returned since according to rfc 2045 messages always have a default type this will always return a value rfc 2045 defines a message s default type to be textplain unless it appears inside a multipartdigest container in which case it would be messagerfc822 this should have no parameters rfc 2045 section 5 2 says if its invalid use textplain return the message s main content type this is the maintype part of the string returned by getcontenttype returns the message s subcontent type this is the subtype part of the string returned by getcontenttype return the default content type most messages have a default content type of textplain except for messages that are subparts of multipartdigest containers such subparts have a default content type of messagerfc822 set the default content type ctype should be either textplain or messagerfc822 although this is not enforced the default content type is not stored in the contenttype header like getparams but preserves the quoting of values baw should 
this be part of the public interface must have been a bare attribute return the message s contenttype parameters as a list the elements of the returned list are 2tuples of keyvalue pairs as split on the sign the left hand side of the is the key while the right hand side is the value if there is no sign in the parameter the value is the empty string the value is as described in the getparam method optional failobj is the object to return if there is no contenttype header optional header is the header to search instead of contenttype if unquote is true the value is unquoted return the parameter value if found in the contenttype header optional failobj is the object to return if there is no contenttype header or the contenttype header has no such parameter optional header is the header to search instead of contenttype parameter keys are always compared case insensitively the return value can either be a string or a 3tuple if the parameter was rfc 2231 encoded when it s a 3tuple the elements of the value are of the form charset language value note that both charset and language can be none in which case you should consider value to be encoded in the usascii charset you can usually ignore language the parameter value either the returned string or the value item in the 3tuple is always unquoted unless unquote is set to false if your application doesn t care whether the parameter was rfc 2231 encoded it can turn the return value into a string as follows rawparam msg getparam foo param email utils collapserfc2231valuerawparam set a parameter in the contenttype header if the parameter already exists in the header its value will be replaced with the new value if header is contenttype and has not yet been defined for this message it will be set to textplain and the new parameter and value will be appended as per rfc 2045 an alternate header can be specified in the header argument and all parameters will be quoted as necessary unless requote is false if charset is specified 
the parameter will be encoded according to rfc 2231 optional language specifies the rfc 2231 language defaulting to the empty string both charset and language should be strings remove the given parameter completely from the contenttype header the header will be rewritten in place without the parameter or its value all values will be quoted as necessary unless requote is false optional header specifies an alternative to the contenttype header set the main type and subtype for the contenttype header type must be a string in the form maintypesubtype otherwise a valueerror is raised this method replaces the contenttype header keeping all the parameters in place if requote is false this leaves the existing header s quoting as is otherwise the parameters will be quoted the default an alternative header can be specified in the header argument when the contenttype header is set we ll always also add a mimeversion header baw should we be strict set the contenttype you get a mimeversion skip the first param it s the old type return the filename associated with the payload if present the filename is extracted from the contentdisposition header s filename parameter and it is unquoted if that header is missing the filename parameter this method falls back to looking for the name parameter return the boundary associated with the payload if present the boundary is extracted from the contenttype header s boundary parameter and it is unquoted rfc 2046 says that boundaries may begin but not end in ws set the boundary parameter in contenttype to boundary this is subtly different than deleting the contenttype header and adding a new one with a new boundary parameter via addheader the main difference is that using the setboundary method preserves the order of the contenttype header in the original message headerparseerror is raised if the message has no contenttype header there was no contenttype header and we don t know what type to set it to so raise an exception the original 
contenttype header had no boundary attribute tack one on the end baw should we raise an exception instead replace the existing contenttype header with the new value return the charset parameter of the contenttype header the returned string is always coerced to lower case if there is no contenttype header or if that header has no charset parameter failobj is returned rfc 2231 encoded so decode it and it better end up as ascii lookuperror will be raised if the charset isn t known to python unicodeerror will be raised if the encoded text contains a character not in the charset charset characters must be in usascii range rfc 2046 4 1 2 says charsets are not case sensitive return a list containing the charsets used in this message the returned list of items describes the contenttype headers charset parameter for this message and all the subparts in its payload each item will either be a string the value of the charset parameter in the contenttype header of that part or the value of the failobj parameter defaults to none if the part does not have a main mime type of text or the charset is not defined the list will contain one string for each part of the message plus one for the container message i e self so that a nonmultipart message will still return a list of length 1 return the message s contentdisposition if it exists or none the return values can be either inline attachment or none according to the rfc2183 i e def walkself return the entire formatted message as a string optional unixfrom when true means include the unix from envelope header maxheaderlen is retained for backward compatibility with the base message class but defaults to none meaning that the policy value for maxlinelength controls the header maximum length policy is passed to the generator instance used to serialize the message if it is not specified the policy associated with the message instance is used return best candidate mime part for display as body of message do a depth first search starting 
with self looking for the first part matching each of the items in preferencelist and return the part corresponding to the first item that has a match or none if no items have a match if related is not included in preferencelist consider the root part of any multipartrelated encountered as a candidate match ignore parts with contentdisposition attachment return an iterator over the nonmain parts of a multipart skip the first of each occurrence of textplain texthtml multipartrelated or multipartalternative in the multipart unless they have a contentdisposition attachment header and include all remaining subparts in the returned iterator when applied to a multipartrelated return all parts except the root part return an empty iterator when applied to a multipartalternative or a nonmultipart certain malformed messages can have content type set to multipart but still have single part body in which case payload copy can fail with attributeerror payload is not a list it is most probably a string for related we treat everything but the root as an attachment the root may be indicated by start if there s no start or we can t find the named start treat the first subpart as the root otherwise we more or less invert the remaining logic in getbody this only really works in edge cases ex nontext related or alternatives if the sending agent sets contentdisposition return an iterator over all immediate subparts of a multipart return an empty iterator for a nonmultipart there is existing content move it to the first subpart c 2001 2007 python software foundation barry warsaw contact email sig python org basic message object for the email package object model intrapackage imports regular expression that matches special characters in parameters the existence of which force quoting of the parameter value split header parameters baw this may be too simple it isn t strictly rfc 2045 section 5 1 compliant but it catches most headers found in the wild we may eventually need a full fledged 
parser rdm we might have a header here for now just stringify it convenience function to format and return a key value pair this will quote the value if needed or if quote is true if value is a three tuple charset language value it will be encoded according to rfc2231 rules if it contains non ascii characters it will likewise be encoded according to rfc2231 rules using the utf 8 charset and a null language a tuple is used for rfc 2231 encoded parameter values where items are charset language value charset is a string not a charset instance rfc 2231 encoded values are never quoted per rfc encode as per rfc 2231 baw please check this i think that if quote is set it should force quoting even if not necessary rdm this might be a header so for now stringify it this is different than utils collapse_rfc2231_value because it doesn t try to convert the value to a unicode message get_param and message get_params are both currently defined to return the tuple in the face of rfc 2231 parameters decode uuencoded data workaround for broken uuencoders by fredrik lundh basic message object a message object is defined as something that has a bunch of rfc 2822 headers and a payload it may optionally have an envelope header a k a unix from or from_ header if the message is a container i e a multipart or a message rfc822 then the payload is a list of message objects otherwise it is a string message objects implement part of the mapping interface which assumes there is exactly one occurrence of the header per message some headers do in fact appear multiple times e g received and for those headers you must use the explicit api to set or get all the headers not all of the mapping methods are implemented defaults for multipart messages default content type return the entire formatted message as a string return the entire formatted message as a string optional unixfrom when true means include the unix from_ envelope header for backward compatibility reasons if maxheaderlen is not specified 
it defaults to 0 so you must override it explicitly if you want a different maxheaderlen policy is passed to the generator instance used to serialize the message if it is not specified the policy associated with the message instance is used if the message object contains binary data that is not encoded according to rfc standards the non compliant data will be replaced by unicode unknown character code points return the entire formatted message as a bytes object return the entire formatted message as a bytes object optional unixfrom when true means include the unix from_ envelope header policy is passed to the bytesgenerator instance used to serialize the message if not specified the policy associated with the message instance is used return true if the message consists of multiple parts unix from_ line payload manipulation add the given payload to the current payload the current payload will always be a list of objects after this method is called if you want to set the payload to a scalar object use set_payload instead return a reference to the payload the payload will either be a list object or a string if you mutate the list object you modify the message s payload in place optional i returns that index into the payload optional decode is a flag indicating whether the payload should be decoded or not according to the content transfer encoding header default is false when true and the message is not a multipart the payload will be decoded if this header s value is quoted printable or base64 if some other encoding is used or the header is missing or if the payload has bogus data i e bogus base64 or uuencoded data the payload is returned as is if the message is a multipart and the decode flag is true then none is returned here is the logic table for this code based on the email5 0 0 code i decode is_multipart result none true true none i true true none none false true _payload a list i false true _payload element i a message i false false error not a list i true 
false error not a list none false false _payload none true false _payload decoded bytes note that barry planned to factor out the decode case but that isn t so easy now that we handle the 8 bit data which needs to be converted in both the decode and non decode path for backward compatibility use isinstance and this error message instead of the more logical is_multipart test cte might be a header so for now stringify it payload may be bytes here this won t happen for rfc compliant messages messages containing only ascii code points in the unicode input if it does happen turn the string into bytes in a way guaranteed not to fail xxx this is a bit of a hack decode_b should probably be factored out somewhere but i haven t figured out where yet some decoding problem set the payload to the given value optional charset sets the message s default character set see set_charset for details set the charset of the payload to a given character set charset can be a charset instance a string naming a character set or none if it is a string it will be converted to a charset instance if charset is none the charset parameter will be removed from the content type field anything else will generate a typeerror the message will be assumed to be of type text encoded with charset input_charset it will be converted to charset output_charset and encoded properly if needed when generating the plain text representation of the message mime headers mime version content type content transfer encoding will be added as needed this if is for backward compatibility it allows unicode through even though that won t work correctly if the message is serialized return the charset instance associated with the message s payload mapping interface partial return the total number of headers including duplicates get a header value return none if the header is missing instead of raising an exception note that if the header appeared multiple times exactly which occurrence gets returned is undefined use get_all 
to get all the values matching a header field name set the value of a header note this does not overwrite an existing header with the same field name use __delitem__ first to delete any existing headers delete all occurrences of a header if present does not raise an exception if the header is missing return a list of all the message s header field names these will be sorted in the order they appeared in the original message or were added to the message and may contain duplicates any fields deleted and re inserted are always appended to the header list return a list of all the message s header values these will be sorted in the order they appeared in the original message or were added to the message and may contain duplicates any fields deleted and re inserted are always appended to the header list get all the message s header fields and values these will be sorted in the order they appeared in the original message or were added to the message and may contain duplicates any fields deleted and re inserted are always appended to the header list get a header value like __getitem__ but return failobj instead of none when the field is missing internal methods public api but only intended for use by a parser or generator not normal application code store name and value in the model without modification this is an internal api intended only for use by a parser return the name value header pairs without modification this is an internal api intended only for use by a generator additional useful stuff return a list of all the values for the named field these will be sorted in the order they appeared in the original message and may contain duplicates any fields deleted and re inserted are always appended to the header list if no such fields exist failobj is returned defaults to none extended header setting name is the header field to add keyword arguments can be used to set additional parameters for the header field with underscores converted to dashes normally the parameter 
will be added as key value unless value is none in which case only the key will be added if a parameter value contains non ascii characters it can be specified as a three tuple of charset language value in which case it will be encoded according to rfc2231 rules otherwise it will be encoded using the utf 8 charset and a language of examples msg add_header content disposition attachment filename bud gif msg add_header content disposition attachment filename utf 8 fußballer ppt msg add_header content disposition attachment filename fußballer ppt replace a header replace the first matching header found in the message retaining header order and case if no matching header was found a keyerror is raised use these three methods instead of the three above return the message s content type the returned string is coerced to lower case of the form maintype subtype if there was no content type header in the message the default type as given by get_default_type will be returned since according to rfc 2045 messages always have a default type this will always return a value rfc 2045 defines a message s default type to be text plain unless it appears inside a multipart digest container in which case it would be message rfc822 this should have no parameters rfc 2045 section 5 2 says if its invalid use text plain return the message s main content type this is the maintype part of the string returned by get_content_type returns the message s sub content type this is the subtype part of the string returned by get_content_type return the default content type most messages have a default content type of text plain except for messages that are subparts of multipart digest containers such subparts have a default content type of message rfc822 set the default content type ctype should be either text plain or message rfc822 although this is not enforced the default content type is not stored in the content type header like get_params but preserves the quoting of values baw should this be 
part of the public interface?)

(Must have been a bare attribute.)

Return the message's Content-Type parameters, as a list. The elements of the returned list are 2-tuples of key/value pairs, as split on the '=' sign. The left hand side of the '=' is the key, while the right hand side is the value. If there is no '=' sign in the parameter the value is the empty string. The value is as described in the get_param() method.

Optional failobj is the object to return if there is no Content-Type header. Optional header is the header to search instead of Content-Type. If unquote is True, the value is unquoted.

Return the parameter value if found in the Content-Type header. Optional failobj is the object to return if there is no Content-Type header, or the Content-Type header has no such parameter. Optional header is the header to search instead of Content-Type.

Parameter keys are always compared case insensitively. The return value can either be a string, or a 3-tuple if the parameter was RFC 2231 encoded. When it's a 3-tuple, the elements of the value are of the form (CHARSET, LANGUAGE, VALUE). Note that both CHARSET and LANGUAGE can be None, in which case you should consider VALUE to be encoded in the us-ascii charset. You can usually ignore LANGUAGE. The parameter value (either the returned string, or the VALUE item in the 3-tuple) is always unquoted, unless unquote is set to False.

If your application doesn't care whether the parameter was RFC 2231 encoded, it can turn the return value into a string as follows:

    rawparam = msg.get_param('foo')
    param = email.utils.collapse_rfc2231_value(rawparam)

Set a parameter in the Content-Type header. If the parameter already exists in the header, its value will be replaced with the new value. If header is Content-Type and has not yet been defined for this message, it will be set to text/plain and the new parameter and value will be appended as per RFC 2045. An alternate header can be specified in the header argument, and all parameters will be quoted as necessary unless requote is False. If charset is
specified, the parameter will be encoded according to RFC 2231. Optional language specifies the RFC 2231 language, defaulting to the empty string. Both charset and language should be strings.

Remove the given parameter completely from the Content-Type header. The header will be re-written in place without the parameter or its value. All values will be quoted as necessary unless requote is False. Optional header specifies an alternative to the Content-Type header.

Set the main type and subtype for the Content-Type header. type must be a string in the form maintype/subtype, otherwise a ValueError is raised.

This method replaces the Content-Type header, keeping all the parameters in place. If requote is False, this leaves the existing header's quoting as is; otherwise the parameters will be quoted (the default). An alternative header can be specified in the header argument.

When the Content-Type header is set, we'll always also add a MIME-Version header. (BAW: should we be strict? Set the Content-Type, you get a MIME-Version.) (Skip the first param; it's the old type.)

Return the filename associated with the payload if present. The filename is extracted from the Content-Disposition header's filename parameter, and it is unquoted. If that header is missing the filename parameter, this method falls back to looking for the name parameter.

Return the boundary associated with the payload if present. The boundary is extracted from the Content-Type header's boundary parameter, and it is unquoted. (RFC 2046 says that boundaries may begin, but not end, in whitespace.)

Set the boundary parameter in Content-Type to boundary. This is subtly different than deleting the Content-Type header and adding a new one with a new boundary parameter via add_header(). The main difference is that using the set_boundary() method preserves the order of the Content-Type header in the original message. HeaderParseError is raised if the message has no Content-Type header. (There was no Content-Type header, and we don't know what type to set it to, so raise an
exception.) (The original Content-Type header had no boundary attribute; tack one on the end. BAW: should we raise an exception instead?) (Replace the existing Content-Type header with the new value.)

Return the charset parameter of the Content-Type header. The returned string is always coerced to lower case. If there is no Content-Type header, or if that header has no charset parameter, failobj is returned. (RFC 2231 encoded, so decode it, and it better end up as ascii. LookupError will be raised if the charset isn't known to Python; UnicodeError will be raised if the encoded text contains a character not in the charset. charset characters must be in us-ascii range. RFC 2046, section 4.1.2 says charsets are not case sensitive.)

Return a list containing the charset(s) used in this message. The returned list of items describes the Content-Type headers' charset parameter for this message and all the subparts in its payload. Each item will either be a string (the value of the charset parameter in the Content-Type header of that part) or the value of the failobj parameter (defaults to None), if the part does not have a main MIME type of text, or the charset is not defined. The list will contain one string for each part of the message, plus one for the container message (i.e. self), so that a non-multipart message will still return a list of length 1.

Return the message's content-disposition if it exists, or None. The return values can be either 'inline', 'attachment' or None according to RFC 2183.

Return the entire formatted message as a string. Optional unixfrom, when true, means include the Unix From_ envelope header. maxheaderlen is retained for backward compatibility with the base Message class, but defaults to None, meaning that the policy value for max_line_length controls the header maximum length. policy is passed to the Generator instance used to serialize the message; if it is not specified the policy associated with the message instance is used.

Return best candidate MIME part for display as 'body' of
message. Do a depth first search, starting with self, looking for the first part matching each of the items in preferencelist, and return the part corresponding to the first item that has a match, or None if no items have a match. If 'related' is not included in preferencelist, consider the root part of any multipart/related encountered as a candidate match. Ignore parts with 'Content-Disposition: attachment'.

Return an iterator over the non-main parts of a multipart. Skip the first of each occurrence of text/plain, text/html, multipart/related, or multipart/alternative in the multipart (unless they have a 'Content-Disposition: attachment' header) and include all remaining subparts in the returned iterator. When applied to a multipart/related, return all parts except the root part. Return an empty iterator when applied to a multipart/alternative or a non-multipart.

(Certain malformed messages can have content type set to multipart/* but still have a single part body, in which case payload.copy() can fail with AttributeError: the payload is not a list, it is most probably a string.)

(For related, we treat everything but the root as an attachment. The root may be indicated by 'start'; if there's no 'start' or we can't find the named start, treat the first subpart as the root. Otherwise we more or less invert the remaining logic in get_body. This only really works in edge cases, e.g. non-text related or alternatives, if the sending agent sets content-disposition. Only skip the first example of each candidate type.)

Return an iterator over all immediate subparts of a multipart. Return an empty iterator for a non-multipart.

(There is existing content; move it to the first subpart.)
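The body-selection and attachment-iteration rules described above can be exercised with a short sketch (not part of the original source; it assumes the standard-library `email` package is importable):

```python
from email.message import EmailMessage

# Build a typical mixed message: text/plain + text/html alternative,
# plus one real attachment.
msg = EmailMessage()
msg['Subject'] = 'demo'
msg.set_content('plain text body')                       # text/plain
msg.add_alternative('<p>html body</p>', subtype='html')  # -> multipart/alternative
msg.add_attachment(b'PDFDATA', maintype='application',
                   subtype='pdf', filename='report.pdf') # -> multipart/mixed

# get_body() walks depth-first and honors the preference list order.
body = msg.get_body(preferencelist=('html', 'plain'))
print(body.get_content_type())   # text/html

# iter_attachments() skips the first candidate body part of each type
# and yields everything else.
print([a.get_filename() for a in msg.iter_attachments()])  # ['report.pdf']
```

Note that the first text/html candidate is consumed as the body, so it is not repeated by iter_attachments().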
__all__ = ['Message', 'EmailMessage']

import binascii
import re
import quopri
from io import BytesIO, StringIO

from email import utils
from email import errors
from email._policybase import compat32
from email import charset as _charset
from email._encoded_words import decode_b
Charset = _charset.Charset

SEMISPACE = '; '

# Matches special characters in parameters, whose presence forces quoting
# of the parameter value.
tspecials = re.compile(r'[ \(\)<>@,;:\\"/\[\]\?=]')


def _splitparam(param):
    a, sep, b = str(param).partition(';')
    if not sep:
        return a.strip(), None
    return a.strip(), b.strip()


def _formatparam(param, value=None, quote=True):
    if value is not None and len(value) > 0:
        # A tuple is used for RFC 2231 encoded parameter values.
        if isinstance(value, tuple):
            param += '*'
            value = utils.encode_rfc2231(value[2], value[0], value[1])
            return '%s=%s' % (param, value)
        else:
            try:
                value.encode('ascii')
            except UnicodeEncodeError:
                param += '*'
                value = utils.encode_rfc2231(value, 'utf-8', '')
                return '%s=%s' % (param, value)
        if quote or tspecials.search(value):
            return '%s="%s"' % (param, utils.quote(value))
        else:
            return '%s=%s' % (param, value)
    else:
        return param


def _parseparam(s):
    # Split on ';' while respecting quoted strings.
    s = ';' + str(s)
    plist = []
    while s[:1] == ';':
        s = s[1:]
        end = s.find(';')
        while end > 0 and (s.count('"', 0, end) - s.count('\\"', 0, end)) % 2:
            end = s.find(';', end + 1)
        if end < 0:
            end = len(s)
        f = s[:end]
        if '=' in f:
            i = f.index('=')
            f = f[:i].strip().lower() + '=' + f[i+1:].strip()
        plist.append(f.strip())
        s = s[end:]
    return plist


def _unquotevalue(value):
    if isinstance(value, tuple):
        return value[0], value[1], utils.unquote(value[2])
    else:
        return utils.unquote(value)


def _decode_uu(encoded):
    decoded_lines = []
    encoded_lines_iter = iter(encoded.splitlines())
    for line in encoded_lines_iter:
        if line.startswith(b"begin "):
            mode, _, path = line.removeprefix(b"begin ").partition(b" ")
            try:
                int(mode, base=8)
            except ValueError:
                continue
            else:
                break
    else:
        raise ValueError("`begin` line not found")
    for line in encoded_lines_iter:
        if not line:
            raise ValueError("Truncated input")
        elif line.strip(b' \t\r\n\f') == b'end':
            break
        try:
            decoded_line =
binascii.a2b_uu(line) except binascii.Error: nbytes = (((line[0]-32) & 63) * 4 + 5) // 3 decoded_line = binascii.a2b_uu(line[:nbytes]) decoded_lines.append(decoded_line) return b''.join(decoded_lines) class Message: def __init__(self, policy=compat32): self.policy = policy self._headers = [] self._unixfrom = None self._payload = None self._charset = None self.preamble = self.epilogue = None self.defects = [] self._default_type = 'text/plain' def __str__(self): return self.as_string() def as_string(self, unixfrom=False, maxheaderlen=0, policy=None): from email.generator import Generator policy = self.policy if policy is None else policy fp = StringIO() g = Generator(fp, mangle_from_=False, maxheaderlen=maxheaderlen, policy=policy) g.flatten(self, unixfrom=unixfrom) return fp.getvalue() def __bytes__(self): return self.as_bytes() def as_bytes(self, unixfrom=False, policy=None): from email.generator import BytesGenerator policy = self.policy if policy is None else policy fp = BytesIO() g = BytesGenerator(fp, mangle_from_=False, policy=policy) g.flatten(self, unixfrom=unixfrom) return fp.getvalue() def is_multipart(self): return isinstance(self._payload, list) def set_unixfrom(self, unixfrom): self._unixfrom = unixfrom def get_unixfrom(self): return self._unixfrom def attach(self, payload): if self._payload is None: self._payload = [payload] else: try: self._payload.append(payload) except AttributeError: raise TypeError("Attach is not valid on a message with a" " non-multipart payload") def get_payload(self, i=None, decode=False): if self.is_multipart(): if decode: return None if i is None: return self._payload else: return self._payload[i] if i is not None and not isinstance(self._payload, list): raise TypeError('Expected list, got %s' % type(self._payload)) payload = self._payload cte = str(self.get('content-transfer-encoding', '')).lower() if isinstance(payload, str): if utils._has_surrogates(payload): bpayload = payload.encode('ascii', 'surrogateescape') if not 
decode: try: payload = bpayload.decode(self.get_param('charset', 'ascii'), 'replace') except LookupError: payload = bpayload.decode('ascii', 'replace') elif decode: try: bpayload = payload.encode('ascii') except UnicodeError: bpayload = payload.encode('raw-unicode-escape') if not decode: return payload if cte == 'quoted-printable': return quopri.decodestring(bpayload) elif cte == 'base64': value, defects = decode_b(b''.join(bpayload.splitlines())) for defect in defects: self.policy.handle_defect(self, defect) return value elif cte in ('x-uuencode', 'uuencode', 'uue', 'x-uue'): try: return _decode_uu(bpayload) except ValueError: return bpayload if isinstance(payload, str): return bpayload return payload def set_payload(self, payload, charset=None): if hasattr(payload, 'encode'): if charset is None: self._payload = payload return if not isinstance(charset, Charset): charset = Charset(charset) payload = payload.encode(charset.output_charset) if hasattr(payload, 'decode'): self._payload = payload.decode('ascii', 'surrogateescape') else: self._payload = payload if charset is not None: self.set_charset(charset) def set_charset(self, charset): if charset is None: self.del_param('charset') self._charset = None return if not isinstance(charset, Charset): charset = Charset(charset) self._charset = charset if 'MIME-Version' not in self: self.add_header('MIME-Version', '1.0') if 'Content-Type' not in self: self.add_header('Content-Type', 'text/plain', charset=charset.get_output_charset()) else: self.set_param('charset', charset.get_output_charset()) if charset != charset.get_output_charset(): self._payload = charset.body_encode(self._payload) if 'Content-Transfer-Encoding' not in self: cte = charset.get_body_encoding() try: cte(self) except TypeError: payload = self._payload if payload: try: payload = payload.encode('ascii', 'surrogateescape') except UnicodeError: payload = payload.encode(charset.output_charset) self._payload = charset.body_encode(payload) 
self.add_header('Content-Transfer-Encoding', cte) def get_charset(self): return self._charset def __len__(self): return len(self._headers) def __getitem__(self, name): return self.get(name) def __setitem__(self, name, val): max_count = self.policy.header_max_count(name) if max_count: lname = name.lower() found = 0 for k, v in self._headers: if k.lower() == lname: found += 1 if found >= max_count: raise ValueError("There may be at most {} {} headers " "in a message".format(max_count, name)) self._headers.append(self.policy.header_store_parse(name, val)) def __delitem__(self, name): name = name.lower() newheaders = [] for k, v in self._headers: if k.lower() != name: newheaders.append((k, v)) self._headers = newheaders def __contains__(self, name): name_lower = name.lower() for k, v in self._headers: if name_lower == k.lower(): return True return False def __iter__(self): for field, value in self._headers: yield field def keys(self): return [k for k, v in self._headers] def values(self): return [self.policy.header_fetch_parse(k, v) for k, v in self._headers] def items(self): return [(k, self.policy.header_fetch_parse(k, v)) for k, v in self._headers] def get(self, name, failobj=None): name = name.lower() for k, v in self._headers: if k.lower() == name: return self.policy.header_fetch_parse(k, v) return failobj def set_raw(self, name, value): self._headers.append((name, value)) def raw_items(self): return iter(self._headers.copy()) def get_all(self, name, failobj=None): values = [] name = name.lower() for k, v in self._headers: if k.lower() == name: values.append(self.policy.header_fetch_parse(k, v)) if not values: return failobj return values def add_header(self, _name, _value, **_params): parts = [] for k, v in _params.items(): if v is None: parts.append(k.replace('_', '-')) else: parts.append(_formatparam(k.replace('_', '-'), v)) if _value is not None: parts.insert(0, _value) self[_name] = SEMISPACE.join(parts) def replace_header(self, _name, _value): _name = 
_name.lower() for i, (k, v) in zip(range(len(self._headers)), self._headers): if k.lower() == _name: self._headers[i] = self.policy.header_store_parse(k, _value) break else: raise KeyError(_name) def get_content_type(self): missing = object() value = self.get('content-type', missing) if value is missing: return self.get_default_type() ctype = _splitparam(value)[0].lower() if ctype.count('/') != 1: return 'text/plain' return ctype def get_content_maintype(self): ctype = self.get_content_type() return ctype.split('/')[0] def get_content_subtype(self): ctype = self.get_content_type() return ctype.split('/')[1] def get_default_type(self): return self._default_type def set_default_type(self, ctype): self._default_type = ctype def _get_params_preserve(self, failobj, header): missing = object() value = self.get(header, missing) if value is missing: return failobj params = [] for p in _parseparam(value): try: name, val = p.split('=', 1) name = name.strip() val = val.strip() except ValueError: name = p.strip() val = '' params.append((name, val)) params = utils.decode_params(params) return params def get_params(self, failobj=None, header='content-type', unquote=True): missing = object() params = self._get_params_preserve(missing, header) if params is missing: return failobj if unquote: return [(k, _unquotevalue(v)) for k, v in params] else: return params def get_param(self, param, failobj=None, header='content-type', unquote=True): if header not in self: return failobj for k, v in self._get_params_preserve(failobj, header): if k.lower() == param.lower(): if unquote: return _unquotevalue(v) else: return v return failobj def set_param(self, param, value, header='Content-Type', requote=True, charset=None, language='', replace=False): if not isinstance(value, tuple) and charset: value = (charset, language, value) if header not in self and header.lower() == 'content-type': ctype = 'text/plain' else: ctype = self.get(header) if not self.get_param(param, header=header): if not 
ctype: ctype = _formatparam(param, value, requote) else: ctype = SEMISPACE.join( [ctype, _formatparam(param, value, requote)]) else: ctype = '' for old_param, old_value in self.get_params(header=header, unquote=requote): append_param = '' if old_param.lower() == param.lower(): append_param = _formatparam(param, value, requote) else: append_param = _formatparam(old_param, old_value, requote) if not ctype: ctype = append_param else: ctype = SEMISPACE.join([ctype, append_param]) if ctype != self.get(header): if replace: self.replace_header(header, ctype) else: del self[header] self[header] = ctype def del_param(self, param, header='content-type', requote=True): if header not in self: return new_ctype = '' for p, v in self.get_params(header=header, unquote=requote): if p.lower() != param.lower(): if not new_ctype: new_ctype = _formatparam(p, v, requote) else: new_ctype = SEMISPACE.join([new_ctype, _formatparam(p, v, requote)]) if new_ctype != self.get(header): del self[header] self[header] = new_ctype def set_type(self, type, header='Content-Type', requote=True): if not type.count('/') == 1: raise ValueError if header.lower() == 'content-type': del self['mime-version'] self['MIME-Version'] = '1.0' if header not in self: self[header] = type return params = self.get_params(header=header, unquote=requote) del self[header] self[header] = type for p, v in params[1:]: self.set_param(p, v, header, requote) def get_filename(self, failobj=None): missing = object() filename = self.get_param('filename', missing, 'content-disposition') if filename is missing: filename = self.get_param('name', missing, 'content-type') if filename is missing: return failobj return utils.collapse_rfc2231_value(filename).strip() def get_boundary(self, failobj=None): missing = object() boundary = self.get_param('boundary', missing) if boundary is missing: return failobj return utils.collapse_rfc2231_value(boundary).rstrip() def set_boundary(self, boundary): missing = object() params = 
self._get_params_preserve(missing, 'content-type') if params is missing: raise errors.HeaderParseError('No Content-Type header found') newparams = [] foundp = False for pk, pv in params: if pk.lower() == 'boundary': newparams.append(('boundary', '"%s"' % boundary)) foundp = True else: newparams.append((pk, pv)) if not foundp: newparams.append(('boundary', '"%s"' % boundary)) newheaders = [] for h, v in self._headers: if h.lower() == 'content-type': parts = [] for k, v in newparams: if v == '': parts.append(k) else: parts.append('%s=%s' % (k, v)) val = SEMISPACE.join(parts) newheaders.append(self.policy.header_store_parse(h, val)) else: newheaders.append((h, v)) self._headers = newheaders def get_content_charset(self, failobj=None): missing = object() charset = self.get_param('charset', missing) if charset is missing: return failobj if isinstance(charset, tuple): pcharset = charset[0] or 'us-ascii' try: as_bytes = charset[2].encode('raw-unicode-escape') charset = str(as_bytes, pcharset) except (LookupError, UnicodeError): charset = charset[2] try: charset.encode('us-ascii') except UnicodeError: return failobj return charset.lower() def get_charsets(self, failobj=None): return [part.get_content_charset(failobj) for part in self.walk()] def get_content_disposition(self): value = self.get('content-disposition') if value is None: return None c_d = _splitparam(value)[0].lower() return c_d from email.iterators import walk class MIMEPart(Message): def __init__(self, policy=None): if policy is None: from email.policy import default policy = default super().__init__(policy) def as_string(self, unixfrom=False, maxheaderlen=None, policy=None): policy = self.policy if policy is None else policy if maxheaderlen is None: maxheaderlen = policy.max_line_length return super().as_string(unixfrom, maxheaderlen, policy) def __str__(self): return self.as_string(policy=self.policy.clone(utf8=True)) def is_attachment(self): c_d = self.get('content-disposition') return False if c_d is None 
else c_d.content_disposition == 'attachment' def _find_body(self, part, preferencelist): if part.is_attachment(): return maintype, subtype = part.get_content_type().split('/') if maintype == 'text': if subtype in preferencelist: yield (preferencelist.index(subtype), part) return if maintype != 'multipart' or not self.is_multipart(): return if subtype != 'related': for subpart in part.iter_parts(): yield from self._find_body(subpart, preferencelist) return if 'related' in preferencelist: yield (preferencelist.index('related'), part) candidate = None start = part.get_param('start') if start: for subpart in part.iter_parts(): if subpart['content-id'] == start: candidate = subpart break if candidate is None: subparts = part.get_payload() candidate = subparts[0] if subparts else None if candidate is not None: yield from self._find_body(candidate, preferencelist) def get_body(self, preferencelist=('related', 'html', 'plain')): best_prio = len(preferencelist) body = None for prio, part in self._find_body(self, preferencelist): if prio < best_prio: best_prio = prio body = part if prio == 0: break return body _body_types = {('text', 'plain'), ('text', 'html'), ('multipart', 'related'), ('multipart', 'alternative')} def iter_attachments(self): maintype, subtype = self.get_content_type().split('/') if maintype != 'multipart' or subtype == 'alternative': return payload = self.get_payload() try: parts = payload.copy() except AttributeError: return if maintype == 'multipart' and subtype == 'related': start = self.get_param('start') if start: found = False attachments = [] for part in parts: if part.get('content-id') == start: found = True else: attachments.append(part) if found: yield from attachments return parts.pop(0) yield from parts return seen = [] for part in parts: maintype, subtype = part.get_content_type().split('/') if ((maintype, subtype) in self._body_types and not part.is_attachment() and subtype not in seen): seen.append(subtype) continue yield part def 
iter_parts(self): if self.is_multipart(): yield from self.get_payload() def get_content(self, *args, content_manager=None, **kw): if content_manager is None: content_manager = self.policy.content_manager return content_manager.get_content(self, *args, **kw) def set_content(self, *args, content_manager=None, **kw): if content_manager is None: content_manager = self.policy.content_manager content_manager.set_content(self, *args, **kw) def _make_multipart(self, subtype, disallowed_subtypes, boundary): if self.get_content_maintype() == 'multipart': existing_subtype = self.get_content_subtype() disallowed_subtypes = disallowed_subtypes + (subtype,) if existing_subtype in disallowed_subtypes: raise ValueError("Cannot convert {} to {}".format( existing_subtype, subtype)) keep_headers = [] part_headers = [] for name, value in self._headers: if name.lower().startswith('content-'): part_headers.append((name, value)) else: keep_headers.append((name, value)) if part_headers: part = type(self)(policy=self.policy) part._headers = part_headers part._payload = self._payload self._payload = [part] else: self._payload = [] self._headers = keep_headers self['Content-Type'] = 'multipart/' + subtype if boundary is not None: self.set_param('boundary', boundary) def make_related(self, boundary=None): self._make_multipart('related', ('alternative', 'mixed'), boundary) def make_alternative(self, boundary=None): self._make_multipart('alternative', ('mixed',), boundary) def make_mixed(self, boundary=None): self._make_multipart('mixed', (), boundary) def _add_multipart(self, _subtype, *args, _disp=None, **kw): if (self.get_content_maintype() != 'multipart' or self.get_content_subtype() != _subtype): getattr(self, 'make_' + _subtype)() part = type(self)(policy=self.policy) part.set_content(*args, **kw) if _disp and 'content-disposition' not in part: part['Content-Disposition'] = _disp self.attach(part) def add_related(self, *args, **kw): self._add_multipart('related', *args, _disp='inline', 
**kw)

    def add_alternative(self, *args, **kw):
        self._add_multipart('alternative', *args, **kw)

    def add_attachment(self, *args, **kw):
        self._add_multipart('mixed', *args, _disp='attachment', **kw)

    def clear(self):
        self._headers = []
        self._payload = None

    def clear_content(self):
        self._headers = [(n, v) for n, v in self._headers
                         if not n.lower().startswith('content-')]
        self._payload = None


class EmailMessage(MIMEPart):

    def set_content(self, *args, **kw):
        super().set_content(*args, **kw)
        if 'MIME-Version' not in self:
            self['MIME-Version'] = '1.0'
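The mapping-style header API defined by Message (append-on-set, case-insensitive lookup, parameterized add_header) can be demonstrated with a brief sketch; this is illustrative usage, not part of the original module:

```python
from email.message import Message

m = Message()
m['To'] = 'alice@example.com'
m['To'] = 'bob@example.com'      # __setitem__ appends; it never overwrites
print(m.get_all('To'))           # ['alice@example.com', 'bob@example.com']

del m['To']                      # __delitem__ removes every occurrence
print('To' in m)                 # False

# add_header turns keyword arguments into header parameters,
# converting underscores to dashes and quoting as needed.
m.add_header('Content-Disposition', 'attachment', filename='bud.gif')
print(m['Content-Disposition'])  # attachment; filename="bud.gif"
print(m.get_filename())          # bud.gif
```

To replace a header value, delete it first (or use replace_header), since plain assignment only appends.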
Copyright (C) 2001-2006 Python Software Foundation. Author: Keith Dart. Contact: email-sig@python.org.

Class representing application/* type MIME documents.

Create an application/* type MIME document. _data contains the bytes for the raw application data. _subtype is the MIME content type subtype, defaulting to 'octet-stream'. _encoder is a function which will perform the actual encoding for transport of the application data, defaulting to base64 encoding. Any additional keyword arguments are passed to the base class constructor, which turns them into parameters on the Content-Type header.
__all__ = ['MIMEApplication']

from email import encoders
from email.mime.nonmultipart import MIMENonMultipart


class MIMEApplication(MIMENonMultipart):
    def __init__(self, _data, _subtype='octet-stream',
                 _encoder=encoders.encode_base64, *, policy=None, **_params):
        if _subtype is None:
            raise TypeError('Invalid application MIME subtype')
        MIMENonMultipart.__init__(self, 'application', _subtype,
                                  policy=policy, **_params)
        self.set_payload(_data)
        _encoder(self)
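A minimal usage sketch (illustrative, not from the original source): wrapping raw bytes as an application/* part, with extra keyword arguments becoming Content-Type parameters.

```python
from email.mime.application import MIMEApplication

# The payload bytes and the 'Name' parameter here are made-up examples.
part = MIMEApplication(b'%PDF-1.4 ...', _subtype='pdf', Name='doc.pdf')
print(part.get_content_type())            # application/pdf
print(part['Content-Transfer-Encoding'])  # base64 (set by the default encoder)
```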
Copyright (C) 2001-2007 Python Software Foundation. Author: Anthony Baxter. Contact: email-sig@python.org.

Class representing audio/* type MIME documents.

Create an audio/* type MIME document. _audiodata contains the bytes for the raw audio data. If this data can be decoded as au, wav, aiff, or aifc, then the subtype will be automatically included in the Content-Type header. Otherwise you can specify the specific audio subtype via the _subtype parameter. If _subtype is not given, and no subtype can be guessed, a TypeError is raised.

_encoder is a function which will perform the actual encoding for transport of the audio data. It takes one argument, which is this audio instance. It should use get_payload() and set_payload() to change the payload to the encoded form. It should also add any Content-Transfer-Encoding or other headers to the message as necessary. The default encoding is base64. Any additional keyword arguments are passed to the base class constructor, which turns them into parameters on the Content-Type header.

(Originally from the sndhdr module. There are others in sndhdr that don't have MIME types; additional ones to be added to sndhdr: midi, mp3, realaudio, wma. Try to identify a sound file type. sndhdr.what() had a pretty cruddy interface, unfortunately, which is why we re-do it here. It would be easier to reverse engineer the Unix 'file' command and use the standard magic file, as shipped with a modern Unix.)
__all__ = ['MIMEAudio']

from email import encoders
from email.mime.nonmultipart import MIMENonMultipart


class MIMEAudio(MIMENonMultipart):
    def __init__(self, _audiodata, _subtype=None,
                 _encoder=encoders.encode_base64, *, policy=None, **_params):
        if _subtype is None:
            _subtype = _what(_audiodata)
        if _subtype is None:
            raise TypeError('Could not find audio MIME subtype')
        MIMENonMultipart.__init__(self, 'audio', _subtype, policy=policy,
                                  **_params)
        self.set_payload(_audiodata)
        _encoder(self)


_rules = []


def _what(data):
    # Try each registered sniffing rule in turn; the first match wins.
    for testfn in _rules:
        if res := testfn(data):
            return res
    else:
        return None


def rule(rulefunc):
    _rules.append(rulefunc)
    return rulefunc


@rule
def _aiff(h):
    if not h.startswith(b'FORM'):
        return None
    if h[8:12] in {b'AIFC', b'AIFF'}:
        return 'x-aiff'
    else:
        return None


@rule
def _au(h):
    if h.startswith(b'.snd'):
        return 'basic'
    else:
        return None


@rule
def _wav(h):
    # 'RIFF' <len> 'WAVE' 'fmt ' <len>
    if not h.startswith(b'RIFF') or h[8:12] != b'WAVE' or h[12:16] != b'fmt ':
        return None
    else:
        return "x-wav"
Copyright (C) 2001-2006 Python Software Foundation. Author: Barry Warsaw. Contact: email-sig@python.org.

Base class for MIME specializations.

This constructor adds a Content-Type and a MIME-Version header. The Content-Type header is taken from the _maintype and _subtype arguments. Additional parameters for this header are taken from the keyword arguments.
__all__ = ['MIMEBase']

import email.policy

from email import message


class MIMEBase(message.Message):
    def __init__(self, _maintype, _subtype, *, policy=None, **_params):
        if policy is None:
            policy = email.policy.compat32
        message.Message.__init__(self, policy=policy)
        ctype = '%s/%s' % (_maintype, _subtype)
        self.add_header('Content-Type', ctype, **_params)
        self['MIME-Version'] = '1.0'
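A minimal sketch of what the constructor produces.  `MIMEBase` is normally subclassed, but instantiating it directly is the usual way to build a generic attachment; the filename parameter here is made up for illustration.

```python
from email.mime.base import MIMEBase

part = MIMEBase('application', 'octet-stream', name='data.bin')
print(part['MIME-Version'])  # 1.0
print(part['Content-Type'])  # application/octet-stream; name="data.bin"
```

Note how the keyword argument (`name`) becomes a parameter on the Content-Type header, exactly as the constructor comment describes.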
# Copyright (C) 2001-2006 Python Software Foundation
# Author: Barry Warsaw
# Contact: email-sig@python.org

"""Class representing message/* MIME documents."""

# MIMEMessage.__init__:
#   Create a message/* type MIME document.
#
#   _msg is a message object and must be an instance of Message, or a
#   derived class of Message, otherwise a TypeError is raised.
#
#   Optional _subtype defines the subtype of the contained message.  The
#   default is "rfc822" (this is defined by the MIME standard, even
#   though the term "rfc822" is technically outdated by RFC 2822).
#
#   It's convenient to use the base class attach() method here; we need
#   to do it this way or we'll get an exception.  And be sure our
#   default type is set correctly.
__all__ = ['MIMEMessage']

from email import message
from email.mime.nonmultipart import MIMENonMultipart


class MIMEMessage(MIMENonMultipart):
    def __init__(self, _msg, _subtype='rfc822', *, policy=None):
        MIMENonMultipart.__init__(self, 'message', _subtype, policy=policy)
        if not isinstance(_msg, message.Message):
            raise TypeError('Argument is not an instance of Message')
        message.Message.attach(self, _msg)
        self.set_default_type('message/rfc822')
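A short sketch of wrapping one message inside another, the typical forwarding pattern (the subject line is made up):

```python
from email.message import Message
from email.mime.message import MIMEMessage

inner = Message()
inner['Subject'] = 'Forwarded report'
inner.set_payload('See attached numbers.')

outer = MIMEMessage(inner)
print(outer.get_content_type())          # message/rfc822
print(outer.get_payload(0)['Subject'])   # Forwarded report
```

Passing anything that is not a `Message` instance (e.g. a raw string) raises the `TypeError` the constructor checks for.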
# Copyright (C) 2002-2006 Python Software Foundation
# Author: Barry Warsaw
# Contact: email-sig@python.org

"""Base class for MIME multipart/* type messages."""

# MIMEMultipart.__init__:
#   Creates a multipart/* type message.
#
#   By default, creates a multipart/mixed message, with proper
#   Content-Type and MIME-Version headers.
#
#   _subtype is the subtype of the multipart content type, defaulting to
#   `mixed'.
#
#   boundary is the multipart boundary string.  By default it is
#   calculated as needed.
#
#   _subparts is a sequence of initial subparts for the payload.  It
#   must be an iterable object, such as a list.  You can always attach
#   new subparts to the message by using the attach() method.
#
#   Additional parameters for the Content-Type header are taken from the
#   keyword arguments (or passed into the _params argument).
#
#   _payload is initialised to an empty list, as the Message
#   superclass's implementation of is_multipart assumes that _payload is
#   a list for multipart messages.
__all__ = ['MIMEMultipart']

from email.mime.base import MIMEBase


class MIMEMultipart(MIMEBase):
    def __init__(self, _subtype='mixed', boundary=None, _subparts=None,
                 *, policy=None, **_params):
        MIMEBase.__init__(self, 'multipart', _subtype, policy=policy,
                          **_params)
        self._payload = []

        if _subparts:
            for p in _subparts:
                self.attach(p)
        if boundary:
            self.set_boundary(boundary)
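A minimal sketch of building a multipart/mixed container and attaching parts after construction (the subject and body text are made up):

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

msg = MIMEMultipart()  # multipart/mixed by default
msg['Subject'] = 'Status'
msg.attach(MIMEText('All systems nominal.'))
msg.attach(MIMEText('<p>All systems nominal.</p>', 'html'))

print(msg.is_multipart())      # True
print(len(msg.get_payload()))  # 2
```

The same parts could instead be passed up front via `_subparts`, or a fixed boundary could be forced with the `boundary` argument; by default the boundary is generated lazily when the message is serialized.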
# Copyright (C) 2002-2006 Python Software Foundation
# Author: Barry Warsaw
# Contact: email-sig@python.org

"""Base class for MIME type messages that are not multipart."""

# The public API prohibits attaching multiple subparts to MIMEBase
# derived subtypes since none of them are, by definition, of content
# type multipart/*.
__all__ = ['MIMENonMultipart']

from email import errors
from email.mime.base import MIMEBase


class MIMENonMultipart(MIMEBase):
    def attach(self, payload):
        raise errors.MultipartConversionError(
            'Cannot attach additional subparts to non-multipart/*')
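A short sketch of the guard in action: `MIMEText` derives from `MIMENonMultipart`, so attaching a second part raises the conversion error rather than silently corrupting the message.

```python
from email.errors import MultipartConversionError
from email.mime.text import MIMEText

note = MIMEText('single body')  # text/* parts are non-multipart
try:
    note.attach(MIMEText('another part'))
except MultipartConversionError as exc:
    print('attach refused:', exc)
```

To hold several parts, wrap them in a `MIMEMultipart` container instead.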
# Copyright (C) 2001-2006 Python Software Foundation
# Author: Barry Warsaw
# Contact: email-sig@python.org

"""Class representing text/* type MIME documents."""

# MIMEText.__init__:
#   Create a text/* type MIME document.
#
#   _text is the string for this message object.
#
#   _subtype is the MIME sub content type, defaulting to "plain".
#
#   _charset is the character set parameter added to the Content-Type
#   header.  This defaults to "us-ascii".  Note that as a side-effect,
#   the Content-Transfer-Encoding header will also be set.
#
#   If no _charset was specified, check to see if there are non-ascii
#   characters present.  If not, use 'us-ascii', otherwise use 'utf-8'.
#   XXX: This can be removed once issue #7304 is fixed.
__all__ = ['MIMEText']

from email.mime.nonmultipart import MIMENonMultipart


class MIMEText(MIMENonMultipart):
    def __init__(self, _text, _subtype='plain', _charset=None, *, policy=None):
        if _charset is None:
            try:
                _text.encode('us-ascii')
                _charset = 'us-ascii'
            except UnicodeEncodeError:
                _charset = 'utf-8'

        MIMENonMultipart.__init__(self, 'text', _subtype, policy=policy,
                                  charset=str(_charset))

        self.set_payload(_text, _charset)
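A minimal sketch of the charset auto-detection described above: pure-ASCII text gets `us-ascii`, while any non-ASCII character switches the part to `utf-8` (the sample strings are made up).

```python
from email.mime.text import MIMEText

ascii_part = MIMEText('plain old text')
print(ascii_part['Content-Type'])    # text/plain; charset="us-ascii"

unicode_part = MIMEText('café menu')
print(unicode_part['Content-Type'])  # text/plain; charset="utf-8"
```

Passing `_charset` explicitly bypasses the probe entirely, which is useful when the text happens to be ASCII but a specific charset label is required downstream.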
# Copyright (C) 2001-2007 Python Software Foundation
# Authors: Barry Warsaw, Thomas Wouters, Anthony Baxter
# Contact: email-sig@python.org

"""A parser of RFC 2822 and MIME email messages."""

# Parser:
#   Parser of RFC 2822 and MIME email messages.
#
#   Creates an in-memory object tree representing the email message,
#   which can then be manipulated and turned over to a Generator to
#   return the textual representation of the message.
#
#   The string must be formatted as a block of RFC 2822 headers and
#   header continuation lines, optionally preceded by a `Unix-from'
#   header.  The header block is terminated either by the end of the
#   string or by a blank line.
#
#   _class is the class to instantiate for new message objects when
#   they must be created.  This class must have a constructor that can
#   take zero arguments.  Default is Message.Message.
#
#   The policy keyword specifies a policy object that controls a number
#   of aspects of the parser's operation.  The default policy maintains
#   backward compatibility.
#
# Parser.parse:
#   Create a message structure from the data in a file.  Reads all the
#   data from the file and returns the root of the message structure.
#   Optional headersonly is a flag specifying whether to stop parsing
#   after reading the headers or not.  The default is False, meaning it
#   parses the entire contents of the file.
#
# Parser.parsestr:
#   Create a message structure from a string.  Returns the root of the
#   message structure.  Optional headersonly is a flag specifying
#   whether to stop parsing after reading the headers or not.  The
#   default is False, meaning it parses the entire contents of the file.
#
# BytesParser:
#   Parser of binary RFC 2822 and MIME email messages.
#
#   Creates an in-memory object tree representing the email message,
#   which can then be manipulated and turned over to a Generator to
#   return the textual representation of the message.
#
#   The input must be formatted as a block of RFC 2822 headers and
#   header continuation lines, optionally preceded by a `Unix-from'
#   header.  The header block is terminated either by the end of the
#   input or by a blank line.
#
#   _class is the class to instantiate for new message objects when
#   they must be created.  This class must have a constructor that can
#   take zero arguments.  Default is Message.Message.
#
# BytesParser.parse:
#   Create a message structure from the data in a binary file.  Reads
#   all the data from the file and returns the root of the message
#   structure.  Optional headersonly is a flag specifying whether to
#   stop parsing after reading the headers or not.  The default is
#   False, meaning it parses the entire contents of the file.
#
# BytesParser.parsebytes:
#   Create a message structure from a byte string.  Returns the root of
#   the message structure.  Optional headersonly is a flag specifying
#   whether to stop parsing after reading the headers or not.  The
#   default is False, meaning it parses the entire contents of the file.
__all__ = ['Parser', 'HeaderParser', 'BytesParser', 'BytesHeaderParser',
           'FeedParser', 'BytesFeedParser']

from io import StringIO, TextIOWrapper

from email.feedparser import FeedParser, BytesFeedParser
from email._policybase import compat32


class Parser:
    def __init__(self, _class=None, *, policy=compat32):
        self._class = _class
        self.policy = policy

    def parse(self, fp, headersonly=False):
        feedparser = FeedParser(self._class, policy=self.policy)
        if headersonly:
            feedparser._set_headersonly()
        while data := fp.read(8192):
            feedparser.feed(data)
        return feedparser.close()

    def parsestr(self, text, headersonly=False):
        return self.parse(StringIO(text), headersonly=headersonly)


class HeaderParser(Parser):
    def parse(self, fp, headersonly=True):
        return Parser.parse(self, fp, True)

    def parsestr(self, text, headersonly=True):
        return Parser.parsestr(self, text, True)


class BytesParser:
    def __init__(self, *args, **kw):
        self.parser = Parser(*args, **kw)

    def parse(self, fp, headersonly=False):
        fp = TextIOWrapper(fp, encoding='ascii', errors='surrogateescape')
        try:
            return self.parser.parse(fp, headersonly)
        finally:
            fp.detach()

    def parsebytes(self, text, headersonly=False):
        text = text.decode('ASCII', errors='surrogateescape')
        return self.parser.parsestr(text, headersonly)


class BytesHeaderParser(BytesParser):
    def parse(self, fp, headersonly=True):
        return BytesParser.parse(self, fp, headersonly=True)

    def parsebytes(self, text, headersonly=True):
        return BytesParser.parsebytes(self, text, headersonly=True)
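A short sketch of the two parsing entry points (the message content is made up): `BytesParser` handles raw wire bytes via the surrogateescape trick above, while `HeaderParser` stops at the end of the header block, which is cheaper when only headers matter.

```python
from email.parser import BytesParser, HeaderParser

raw = b'Subject: weekly sync\nTo: team@example.com\n\nAgenda attached.\n'

msg = BytesParser().parsebytes(raw)
print(msg['Subject'])     # weekly sync
print(msg.get_payload())  # Agenda attached.

# HeaderParser leaves the body unparsed as a plain payload string.
hdrs = HeaderParser().parsestr(raw.decode('ascii'))
print(hdrs['To'])         # team@example.com
```

For file-like sources, `Parser.parse` / `BytesParser.parse` read in 8 KiB chunks and feed them to the underlying `FeedParser`, so arbitrarily large messages can be parsed incrementally.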
# This will be the home for the policy that hooks in the new code that
# adds all the email6 features.

# EmailPolicy:
#   + PROVISIONAL
#
#   The API extensions enabled by this policy are currently provisional.
#   Refer to the documentation for details.
#
#   This policy adds new header parsing and folding algorithms.  Instead
#   of simple strings, headers are custom objects with custom attributes
#   depending on the type of the field.  The folding algorithm fully
#   implements RFCs 2047 and 5322.
#
#   In addition to the settable attributes listed above that apply to
#   all policies, this policy adds the following additional attributes:
#
#   utf8            -- if False (the default) message headers will be
#                      serialized as ASCII, using encoded words to
#                      encode any non-ASCII characters in the source
#                      strings.  If True, the message headers will be
#                      serialized using utf8 and will not contain
#                      encoded words (see RFC 6532 for more on this
#                      serialization format).
#
#   refold_source   -- if the value for a header in the Message object
#                      came from the parsing of some source, this
#                      attribute indicates whether or not a generator
#                      should refold that value when transforming the
#                      message back into stream form.  The possible
#                      values are:
#
#                      none -- all source values use original folding
#                      long -- source values that have any line that is
#                              longer than max_line_length will be
#                              refolded
#                      all  -- all values are refolded.
#
#                      The default is 'long'.
#
#   header_factory  -- a callable that takes two arguments, 'name' and
#                      'value', where 'name' is a header field name and
#                      'value' is an unfolded header field value, and
#                      returns a string-like object that represents that
#                      header.  A default header_factory is provided
#                      that understands some of the RFC5322 header field
#                      types.  (Currently address fields and date fields
#                      have special treatment, while all other fields
#                      are treated as unstructured.  This list will be
#                      completed before the extension is marked stable.)
#
#   content_manager -- an object with at least two methods: get_content
#                      and set_content.  When the get_content or
#                      set_content method of a Message object is called,
#                      it calls the corresponding method of this object,
#                      passing it the message object as its first
#                      argument, and any arguments or keywords that were
#                      passed to it as additional arguments.  The
#                      default content_manager is
#                      email.contentmanager.raw_data_manager.
#
# __init__:
#   Ensure that each new instance gets a unique header factory (as
#   opposed to clones, which share the factory).
#
# header_max_count:
#   The implementation for this class returns the max_count attribute
#   from the specialized header class that would be used to construct a
#   header of type 'name'.
#
# The logic of the next three methods is chosen such that it is possible
# to switch a Message object between a Compat32 policy and a policy
# derived from this class and have the results stay consistent.  This
# allows a Message object constructed with this policy to be passed to a
# library that only handles Compat32 objects, or to receive such an
# object and convert it to use the newer style by just changing its
# policy.  It is also chosen because it postpones the relatively
# expensive full RFC5322 parse until as late as possible when parsing
# from source, since in many applications only a few headers will
# actually be inspected.
#
# header_source_parse:
#   The name is parsed as everything up to the ':' and returned
#   unmodified.  The value is determined by stripping leading whitespace
#   off the remainder of the first line, joining all subsequent lines
#   together, and stripping any trailing carriage return or linefeed
#   characters.  (This is the same as Compat32.)
#
# header_store_parse:
#   The name is returned unchanged.  If the input value has a 'name'
#   attribute and it matches the name ignoring case, the value is
#   returned unchanged.  Otherwise the name and value are passed to the
#   header_factory method, and the resulting custom header object is
#   returned as the value.  In this case a ValueError is raised if the
#   input value contains CR or LF characters.
#
#   XXX: this error message isn't quite right when we use splitlines
#   (see issue 22233), but I'm not sure what should happen here.
#
# header_fetch_parse:
#   If the value has a 'name' attribute, it is returned unmodified.
#   Otherwise the name and the value with any linesep characters removed
#   are passed to the header_factory method, and the resulting custom
#   header object is returned.  Any surrogateescaped bytes get turned
#   into the unicode unknown-character glyph.
#
#   We can't use splitlines here because it splits on more than \r
#   and \n.
#
# fold:
#   Header folding is controlled by the refold_source policy setting.  A
#   value is considered to be a 'source value' if and only if it does
#   not have a 'name' attribute (having a 'name' attribute means it is a
#   header object of some sort).  If a source value needs to be refolded
#   according to the policy, it is converted into a custom header object
#   by passing the name and the value with any linesep characters
#   removed to the header_factory method.  Folding of a custom header
#   object is done by calling its fold method with the current policy.
#
#   Source values are split into lines using splitlines.  If the value
#   is not to be refolded, the lines are rejoined using the linesep from
#   the policy and returned.  The exception is lines containing
#   non-ascii binary data.  In that case the value is refolded
#   regardless of the refold_source setting, which causes the binary
#   data to be CTE encoded using the unknown-8bit charset.
#
# fold_binary:
#   The same as fold if cte_type is 7bit, except that the returned value
#   is bytes.
#
#   If cte_type is 8bit, non-ASCII binary data is converted back into
#   bytes.  Headers with binary data are not refolded, regardless of the
#   refold_header setting, since there is no way to know whether the
#   binary data consists of single byte characters or multibyte
#   characters.
#
#   If utf8 is true, headers are encoded to utf8, otherwise to ascii
#   with non-ASCII unicode rendered as encoded words.
import re
import sys

from email._policybase import Policy, Compat32, compat32, _extend_docstrings
from email.utils import _has_surrogates
from email.headerregistry import HeaderRegistry as HeaderRegistry
from email.contentmanager import raw_data_manager
from email.message import EmailMessage

__all__ = [
    'Compat32',
    'compat32',
    'Policy',
    'EmailPolicy',
    'default',
    'strict',
    'SMTP',
    'HTTP',
    ]

linesep_splitter = re.compile(r'\n|\r')


@_extend_docstrings
class EmailPolicy(Policy):
    message_factory = EmailMessage
    utf8 = False
    refold_source = 'long'
    header_factory = HeaderRegistry()
    content_manager = raw_data_manager

    def __init__(self, **kw):
        if 'header_factory' not in kw:
            object.__setattr__(self, 'header_factory', HeaderRegistry())
        super().__init__(**kw)

    def header_max_count(self, name):
        return self.header_factory[name].max_count

    def header_source_parse(self, sourcelines):
        name, value = sourcelines[0].split(':', 1)
        value = value.lstrip(' \t') + ''.join(sourcelines[1:])
        return (name, value.rstrip('\r\n'))

    def header_store_parse(self, name, value):
        if hasattr(value, 'name') and value.name.lower() == name.lower():
            return (name, value)
        if isinstance(value, str) and len(value.splitlines()) > 1:
            raise ValueError("Header values may not contain linefeed "
                             "or carriage return characters")
        return (name, self.header_factory(name, value))

    def header_fetch_parse(self, name, value):
        if hasattr(value, 'name'):
            return value
        value = ''.join(linesep_splitter.split(value))
        return self.header_factory(name, value)

    def fold(self, name, value):
        return self._fold(name, value, refold_binary=True)

    def fold_binary(self, name, value):
        folded = self._fold(name, value,
                            refold_binary=self.cte_type == '7bit')
        charset = 'utf8' if self.utf8 else 'ascii'
        return folded.encode(charset, 'surrogateescape')

    def _fold(self, name, value, refold_binary=False):
        if hasattr(value, 'name'):
            return value.fold(policy=self)
        maxlen = self.max_line_length if self.max_line_length else sys.maxsize
        lines = value.splitlines()
        refold = (self.refold_source == 'all' or
                  self.refold_source == 'long' and
                  (lines and len(lines[0]) + len(name) + 2 > maxlen or
                   any(len(x) > maxlen for x in lines[1:])))
        if refold or refold_binary and _has_surrogates(value):
            return self.header_factory(name, ''.join(lines)).fold(policy=self)
        return name + ': ' + self.linesep.join(lines) + self.linesep


default = EmailPolicy()
# Make the default policy use the class default header_factory.
del default.header_factory
strict = default.clone(raise_on_defect=True)
SMTP = default.clone(linesep='\r\n')
HTTP = default.clone(linesep='\r\n', max_line_length=None)
SMTPUTF8 = SMTP.clone(utf8=True)
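A short sketch of the behavioral difference this policy makes (the message content is made up): parsed with `policy.default`, headers come back as structured objects rather than plain strings, so address fields expose their parsed parts directly.

```python
from email import policy
from email.parser import BytesParser

raw = b'Subject: hello\r\nFrom: Ana <ana@example.com>\r\n\r\nhi\r\n'
msg = BytesParser(policy=policy.default).parsebytes(raw)

# Address headers are parsed into Address objects.
addr = msg['From'].addresses[0]
print(addr.display_name)           # Ana
print(addr.username, addr.domain)  # ana example.com
print(str(msg['subject']))         # hello (lookup is case-insensitive)
```

Swapping in `policy.SMTP` or `policy.SMTPUTF8` changes only serialization (CRLF line endings, utf8 headers); the parsed-object interface stays the same, which is the consistency property the three `header_*_parse` methods are designed around.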
# Copyright (C) 2001-2010 Python Software Foundation
# Author: Barry Warsaw
# Contact: email-sig@python.org

"""Miscellaneous utilities."""

__all__ = [
    'collapse_rfc2231_value',
    'decode_params',
    'decode_rfc2231',
    'encode_rfc2231',
    'formataddr',
    'formatdate',
    'format_datetime',
    'getaddresses',
    'make_msgid',
    'mktime_tz',
    'parseaddr',
    'parsedate',
    'parsedate_tz',
    'parsedate_to_datetime',
    'unquote',
    ]

import os
import re
import time
import datetime
import urllib.parse

from email._parseaddr import quote
from email._parseaddr import AddressList as _AddressList
from email._parseaddr import mktime_tz
from email._parseaddr import parsedate, parsedate_tz, _parsedate_tz

COMMASPACE = ', '
EMPTYSTRING = ''
UEMPTYSTRING = ''
CRLF = '\r\n'
TICK = "'"

specialsre = re.compile(r'[][\\()<>@,:;".]')
escapesre = re.compile(r'[\\"]')


def _has_surrogates(s):
    """Return True if s contains surrogate-escaped binary data."""
    # This check is based on the fact that unless there are surrogates, utf8
    # (Python's default encoding) can encode any string.  This is the fastest
    # way to check for surrogates, see issue 11454 for timings.
    try:
        s.encode()
        return False
    except UnicodeEncodeError:
        return True


# How to deal with a string containing bytes before handing it to the
# application through the 'normal' interface.
def _sanitize(string):
    # Turn any escaped bytes into unicode 'unknown' char.  If the escaped
    # bytes happen to be utf-8 they will instead get decoded, even if they
    # were invalid in the charset the source was supposed to be in.  This
    # seems like it is not a bad thing; a defect was still registered.
    original_bytes = string.encode('utf-8', 'surrogateescape')
    return original_bytes.decode('utf-8', 'replace')


# Helpers

def formataddr(pair, charset='utf-8'):
    """The inverse of parseaddr(), this takes a 2-tuple of the form
    (realname, email_address) and returns the string value suitable
    for an RFC 2822 From, To or Cc header.

    If the first element of pair is false, then the second element is
    returned unmodified.

    The optional charset is the character set that is used to encode
    realname in case realname is not ASCII safe.  Can be an instance of
    str or a Charset-like object which has a header_encode method.
    Default is 'utf-8'.
    """
    name, address = pair
    # The address MUST (per RFC) be ascii, so raise a UnicodeError if it
    # isn't.
    address.encode('ascii')
    if name:
        try:
            name.encode('ascii')
        except UnicodeEncodeError:
            if isinstance(charset, str):
                # lazy import to improve module import time
                from email.charset import Charset
                charset = Charset(charset)
            encoded_name = charset.header_encode(name)
            return "%s <%s>" % (encoded_name, address)
        else:
            quotes = ''
            if specialsre.search(name):
                quotes = '"'
            name = escapesre.sub(r'\\\g<0>', name)
            return '%s%s%s <%s>' % (quotes, name, quotes, address)
    return address


def getaddresses(fieldvalues):
    """Return a list of (REALNAME, EMAIL) for each fieldvalue."""
    all = COMMASPACE.join(str(v) for v in fieldvalues)
    a = _AddressList(all)
    return a.addresslist


def _format_timetuple_and_zone(timetuple, zone):
    return '%s, %02d %s %04d %02d:%02d:%02d %s' % (
        ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun'][timetuple[6]],
        timetuple[2],
        ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',
         'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'][timetuple[1] - 1],
        timetuple[0], timetuple[3], timetuple[4], timetuple[5],
        zone)


def formatdate(timeval=None, localtime=False, usegmt=False):
    """Returns a date string as specified by RFC 2822, e.g.:

    Fri, 09 Nov 2001 01:08:47 -0000

    Optional timeval if given is a floating point time value as accepted
    by gmtime() and localtime(), otherwise the current time is used.

    Optional localtime is a flag that when True, interprets timeval, and
    returns a date relative to the local timezone instead of UTC,
    properly taking daylight savings time into account.

    Optional argument usegmt means that the timezone is written out as
    an ascii string, not numeric one (so "GMT" instead of "+0000").
    This is needed for HTTP, and is only used when localtime==False.
    """
    # Note: we cannot use strftime() because that honors the locale and
    # RFC 2822 requires that day and month names be the English
    # abbreviations.
    if timeval is None:
        timeval = time.time()
    dt = datetime.datetime.fromtimestamp(timeval, datetime.timezone.utc)

    if localtime:
        dt = dt.astimezone()
        usegmt = False
    elif not usegmt:
        dt = dt.replace(tzinfo=None)
    return format_datetime(dt, usegmt)


def format_datetime(dt, usegmt=False):
    """Turn a datetime into a date string as specified in RFC 2822.

    If usegmt is True, dt must be an aware datetime with an offset of
    zero.  In this case 'GMT' will be rendered instead of the normal
    +0000 required by RFC2822.  This is to support HTTP headers
    involving date stamps.
    """
    now = dt.timetuple()
    if usegmt:
        if dt.tzinfo is None or dt.tzinfo != datetime.timezone.utc:
            raise ValueError("usegmt option requires a UTC datetime")
        zone = 'GMT'
    elif dt.tzinfo is None:
        zone = '-0000'
    else:
        zone = dt.strftime("%z")
    return _format_timetuple_and_zone(now, zone)


def make_msgid(idstring=None, domain=None):
    """Returns a string suitable for RFC 2822 compliant Message-ID, e.g:

    <142480216486.20800.16526388040877946887@nightshade.la.mastaler.com>

    Optional idstring if given, is a string used to strengthen the
    uniqueness of the message id.  Optional domain if given provides the
    portion of the message id after the '@'.  It defaults to the locally
    defined hostname.
    """
    # Lazy imports to speedup module import time
    # (no other functions in email.utils need these modules)
    import random
    import socket

    timeval = int(time.time() * 100)
    pid = os.getpid()
    randint = random.getrandbits(64)
    if idstring is None:
        idstring = ''
    else:
        idstring = '.' + idstring
    if domain is None:
        domain = socket.getfqdn()
    msgid = '<%d.%d.%d%s@%s>' % (timeval, pid, randint, idstring, domain)
    return msgid


def parsedate_to_datetime(data):
    parsed_date_tz = parsedate_tz(data)
    if parsed_date_tz is None:
        raise ValueError('Invalid date value or format "%s"' % str(data))
    *dtuple, tz = parsed_date_tz
    if tz is None:
        return datetime.datetime(*dtuple[:6])
    return datetime.datetime(*dtuple[:6],
            tzinfo=datetime.timezone(datetime.timedelta(seconds=tz)))


def parseaddr(addr):
    """Parse addr into its constituent realname and email address parts.

    Return a tuple of realname and email address, unless the parse
    fails, in which case return a 2-tuple of ('', '').
    """
    addrs = _AddressList(addr).addresslist
    if not addrs:
        return '', ''
    return addrs[0]


# rfc822.unquote() doesn't properly de-backslash-ify in Python pre-2.3.
def unquote(str):
    """Remove quotes from a string."""
    if len(str) > 1:
        if str.startswith('"') and str.endswith('"'):
            return str[1:-1].replace('\\\\', '\\').replace('\\"', '"')
        if str.startswith('<') and str.endswith('>'):
            return str[1:-1]
    return str


# RFC2231-related functions - parameter encoding and decoding.

def decode_rfc2231(s):
    """Decode string according to RFC 2231"""
    parts = s.split(TICK, 2)
    if len(parts) <= 2:
        return None, None, s
    return parts


def encode_rfc2231(s, charset=None, language=None):
    """Encode string according to RFC 2231.

    If neither charset nor language is given, then s is returned as-is.
    If charset is given but not language, the string is encoded using
    the empty string for language.
    """
    s = urllib.parse.quote(s, safe='', encoding=charset or 'ascii')
    if charset is None and language is None:
        return s
    if language is None:
        language = ''
    return "%s'%s'%s" % (charset, language, s)


rfc2231_continuation = re.compile(r'^(?P<name>\w+)\*((?P<num>[0-9]+)\*?)?$',
    re.ASCII)


def decode_params(params):
    """Decode parameters list according to RFC 2231.

    params is a sequence of 2-tuples containing (param name, string
    value).
    """
    new_params = [params[0]]
    # Map parameter's name to a list of continuations.  The values are a
    # 3-tuple of the continuation number, the string value, and a flag
    # specifying whether a particular segment is %-encoded.
    rfc2231_params = {}
    for name, value in params[1:]:
        encoded = name.endswith('*')
        value = unquote(value)
        mo = rfc2231_continuation.match(name)
        if mo:
            name, num = mo.group('name', 'num')
            if num is not None:
                num = int(num)
            rfc2231_params.setdefault(name, []).append((num, value, encoded))
        else:
            new_params.append((name, '"%s"' % quote(value)))
    if rfc2231_params:
        for name, continuations in rfc2231_params.items():
            value = []
            extended = False
            # Sort by number.
            continuations.sort()
            # And now append all values in numerical order, converting
            # %-encodings for the encoded segments.  If any of the
            # continuation names ends in a *, then the entire string,
            # after decoding segments and concatenating, must have the
            # charset and language specifiers at the beginning of the
            # string.
            for num, s, encoded in continuations:
                if encoded:
                    # Decode as "latin-1", so the characters in s
                    # directly represent the percent-encoded octet
                    # values.  collapse_rfc2231_value treats this as an
                    # octet sequence.
                    s = urllib.parse.unquote(s, encoding="latin-1")
                    extended = True
                value.append(s)
            value = quote(EMPTYSTRING.join(value))
            if extended:
                charset, language, value = decode_rfc2231(value)
                new_params.append(
                    (name, (charset, language, '"%s"' % value)))
            else:
                new_params.append((name, '"%s"' % value))
    return new_params


def collapse_rfc2231_value(value, errors='replace',
                           fallback_charset='us-ascii'):
    if not isinstance(value, tuple) or len(value) != 3:
        return unquote(value)
    # While value comes to us as a unicode string, we need it to be a
    # bytes object.  We do not want bytes()'s normal utf-8 decoder, we
    # want a straight interpretation of the string as character bytes.
    charset, language, text = value
    if charset is None:
        # Issue 17369: if charset/lang is None, decode_rfc2231 couldn't
        # parse the value, so use the fallback_charset.
        charset = fallback_charset
    rawbytes = bytes(text, 'raw-unicode-escape')
    try:
        return str(rawbytes, charset, errors)
    except LookupError:
        # charset is not a known codec.
        return unquote(text)


# datetime doesn't provide a localtime function yet, so provide one.
# Code adapted from the patch in issue 9527.  This may not be perfect,
# but it is better than not having it.

def localtime(dt=None, isdst=None):
    if isdst is not None:
        import warnings
        warnings._deprecated(
            "The 'isdst' parameter to 'localtime'",
            message='{name} is deprecated and slated for removal in '
                    'Python {remove}',
            remove=(3, 14),
        )
    if dt is None:
        dt = datetime.datetime.now()
    return dt.astimezone()
continuation number the string value and a flag specifying whether a particular segment is encoded sort by number and now append all values in numerical order converting encodings for the encoded segments if any of the continuation names ends in a then the entire string after decoding segments and concatenating must have the charset and language specifiers at the beginning of the string decode as latin 1 so the characters in s directly represent the percent encoded octet values collapse_rfc2231_value treats this as an octet sequence while value comes to us as a unicode string we need it to be a bytes object we do not want bytes normal utf 8 decoder we want a straight interpretation of the string as character bytes issue 17369 if charset lang is none decode_rfc2231 couldn t parse the value so use the fallback_charset charset is not a known codec datetime doesn t provide a localtime function yet so provide one code adapted from the patch in issue 9527 this may not be perfect but it is better than not having it return local time as an aware datetime object if called without arguments return current time otherwise dt argument should be a datetime instance and it is converted to the local time zone according to the system time zone database if dt is naive that is dt tzinfo is none it is assumed to be in local time the isdst parameter is ignored
__all__ = [
    'collapse_rfc2231_value',
    'decode_params',
    'decode_rfc2231',
    'encode_rfc2231',
    'formataddr',
    'formatdate',
    'format_datetime',
    'getaddresses',
    'make_msgid',
    'mktime_tz',
    'parseaddr',
    'parsedate',
    'parsedate_tz',
    'parsedate_to_datetime',
    'unquote',
    ]

import os
import re
import time
import datetime
import urllib.parse

from email._parseaddr import quote
from email._parseaddr import AddressList as _AddressList
from email._parseaddr import mktime_tz
from email._parseaddr import parsedate, parsedate_tz, _parsedate_tz

COMMASPACE = ', '
EMPTYSTRING = ''
UEMPTYSTRING = ''
CRLF = '\r\n'
TICK = "'"

specialsre = re.compile(r'[][\\()<>@,:;".]')
escapesre = re.compile(r'[\\"]')


def _has_surrogates(s):
    # See issue 11454: unless there are surrogates, utf8 (Python's
    # default encoding) can encode any string.
    try:
        s.encode()
        return False
    except UnicodeEncodeError:
        return True


def _sanitize(string):
    original_bytes = string.encode('utf-8', 'surrogateescape')
    return original_bytes.decode('utf-8', 'replace')


def formataddr(pair, charset='utf-8'):
    name, address = pair
    # The address MUST (per RFC) be ascii, so raise a UnicodeError if it isn't.
    address.encode('ascii')
    if name:
        try:
            name.encode('ascii')
        except UnicodeEncodeError:
            if isinstance(charset, str):
                # Lazy import to improve module import time
                from email.charset import Charset
                charset = Charset(charset)
            encoded_name = charset.header_encode(name)
            return "%s <%s>" % (encoded_name, address)
        else:
            quotes = ''
            if specialsre.search(name):
                quotes = '"'
            name = escapesre.sub(r'\\\g<0>', name)
            return '%s%s%s <%s>' % (quotes, name, quotes, address)
    return address


def getaddresses(fieldvalues):
    all = COMMASPACE.join(str(v) for v in fieldvalues)
    a = _AddressList(all)
    return a.addresslist


def _format_timetuple_and_zone(timetuple, zone):
    return '%s, %02d %s %04d %02d:%02d:%02d %s' % (
        ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun'][timetuple[6]],
        timetuple[2],
        ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',
         'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'][timetuple[1] - 1],
        timetuple[0], timetuple[3], timetuple[4], timetuple[5],
        zone)


def formatdate(timeval=None, localtime=False, usegmt=False):
    # Note: we cannot use strftime() because that honors the locale and
    # RFC 2822 requires that day and month names be the English
    # abbreviations.
    if timeval is None:
        timeval = time.time()
    dt = datetime.datetime.fromtimestamp(timeval, datetime.timezone.utc)
    if localtime:
        dt = dt.astimezone()
        usegmt = False
    elif not usegmt:
        dt = dt.replace(tzinfo=None)
    return format_datetime(dt, usegmt)


def format_datetime(dt, usegmt=False):
    now = dt.timetuple()
    if usegmt:
        if dt.tzinfo is None or dt.tzinfo != datetime.timezone.utc:
            raise ValueError("usegmt option requires a UTC datetime")
        zone = 'GMT'
    elif dt.tzinfo is None:
        zone = '-0000'
    else:
        zone = dt.strftime("%z")
    return _format_timetuple_and_zone(now, zone)


def make_msgid(idstring=None, domain=None):
    # Lazy imports to speedup module import time; no other functions in
    # email.utils need these modules.
    import random
    import socket

    timeval = int(time.time()*100)
    pid = os.getpid()
    randint = random.getrandbits(64)
    if idstring is None:
        idstring = ''
    else:
        idstring = '.' + idstring
    if domain is None:
        domain = socket.getfqdn()
    msgid = '<%d.%d.%d%s@%s>' % (timeval, pid, randint, idstring, domain)
    return msgid


def parsedate_to_datetime(data):
    parsed_date_tz = _parsedate_tz(data)
    if parsed_date_tz is None:
        raise ValueError('Invalid date value or format "%s"' % str(data))
    *dtuple, tz = parsed_date_tz
    if tz is None:
        return datetime.datetime(*dtuple[:6])
    return datetime.datetime(*dtuple[:6],
            tzinfo=datetime.timezone(datetime.timedelta(seconds=tz)))


def parseaddr(addr):
    # Return a tuple of (realname, email address), or ('', '') if the
    # parse fails.
    addrs = _AddressList(addr).addresslist
    if not addrs:
        return '', ''
    return addrs[0]


# rfc822.unquote() doesn't properly de-backslash-ify in Python pre-2.3.
def unquote(str):
    """Remove quotes from a string."""
    if len(str) > 1:
        if str.startswith('"') and str.endswith('"'):
            return str[1:-1].replace('\\\\', '\\').replace('\\"', '"')
        if str.startswith('<') and str.endswith('>'):
            return str[1:-1]
    return str


# RFC 2231-related functions - parameter encoding and decoding.

def decode_rfc2231(s):
    """Decode string according to RFC 2231"""
    parts = s.split(TICK, 2)
    if len(parts) <= 2:
        return None, None, s
    return parts


def encode_rfc2231(s, charset=None, language=None):
    s = urllib.parse.quote(s, safe='', encoding=charset or 'ascii')
    if charset is None and language is None:
        return s
    if language is None:
        language = ''
    return "%s'%s'%s" % (charset, language, s)


rfc2231_continuation = re.compile(r'^(?P<name>\w+)\*((?P<num>[0-9]+)\*?)?$',
    re.ASCII)


def decode_params(params):
    new_params = [params[0]]
    # Map parameter's name to a list of continuations.  The values are a
    # 3-tuple of the continuation number, the string value, and a flag
    # specifying whether a particular segment is %-encoded.
    rfc2231_params = {}
    for name, value in params[1:]:
        encoded = name.endswith('*')
        value = unquote(value)
        mo = rfc2231_continuation.match(name)
        if mo:
            name, num = mo.group('name', 'num')
            if num is not None:
                num = int(num)
            rfc2231_params.setdefault(name, []).append((num, value, encoded))
        else:
            new_params.append((name, '"%s"' % quote(value)))
    if rfc2231_params:
        for name, continuations in rfc2231_params.items():
            value = []
            extended = False
            # Sort by number
            continuations.sort()
            # And now append all values in numerical order, converting
            # %-encodings for the encoded segments.  If any of the
            # continuation names ends in a *, then the entire string, after
            # decoding segments and concatenating, must have the charset and
            # language specifiers at the beginning of the string.
            for num, s, encoded in continuations:
                if encoded:
                    # Decode as "latin-1", so the characters in s directly
                    # represent the percent-encoded octet values.
                    # collapse_rfc2231_value treats this as an octet sequence.
                    s = urllib.parse.unquote(s, encoding="latin-1")
                    extended = True
                value.append(s)
            value = quote(EMPTYSTRING.join(value))
            if extended:
                charset, language, value = decode_rfc2231(value)
                new_params.append((name, (charset, language, '"%s"' % value)))
            else:
                new_params.append((name, '"%s"' % value))
    return new_params


def collapse_rfc2231_value(value, errors='replace',
                           fallback_charset='us-ascii'):
    if not isinstance(value, tuple) or len(value) != 3:
        return unquote(value)
    # While value comes to us as a unicode string, we need it to be a bytes
    # object.  We do not want bytes' normal utf-8 decoder, we want a straight
    # interpretation of the string as character bytes.
    charset, language, text = value
    if charset is None:
        # Issue 17369: if charset/lang is None, decode_rfc2231 couldn't parse
        # the value, so use the fallback_charset.
        charset = fallback_charset
    rawbytes = bytes(text, 'raw-unicode-escape')
    try:
        return str(rawbytes, charset, errors)
    except LookupError:
        # charset is not a known codec.
        return unquote(text)


# datetime doesn't provide a localtime function yet, so provide one.  Code
# adapted from the patch in issue 9527.  This may not be perfect, but it is
# better than not having it.

def localtime(dt=None, isdst=None):
    if isdst is not None:
        import warnings
        warnings._deprecated(
            "The 'isdst' parameter to 'localtime'",
            message='{name} is deprecated and slated for removal in '
                    'Python {remove}',
            remove=(3, 14),
        )
    if dt is None:
        dt = datetime.datetime.now()
    return dt.astimezone()
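A short usage sketch of the utilities defined above, exercised through the standard library's `email.utils`; the sample name, address, and timestamp are illustrative only:

```python
from email.utils import formataddr, parseaddr, format_datetime
import datetime

# formataddr/parseaddr round-trip a (realname, address) pair.
header = formataddr(('Jane Doe', 'jane@example.com'))
assert header == 'Jane Doe <jane@example.com>'
assert parseaddr(header) == ('Jane Doe', 'jane@example.com')

# A realname containing specials (here a comma) is quoted per RFC 2822.
assert formataddr(('Doe, Jane', 'jane@example.com')) == \
    '"Doe, Jane" <jane@example.com>'

# format_datetime renders an aware UTC datetime; usegmt writes "GMT"
# instead of the numeric "+0000" (needed for HTTP date headers).
dt = datetime.datetime(2001, 11, 9, 1, 8, 47, tzinfo=datetime.timezone.utc)
assert format_datetime(dt) == 'Fri, 09 Nov 2001 01:08:47 +0000'
assert format_datetime(dt, usegmt=True) == 'Fri, 09 Nov 2001 01:08:47 GMT'
```

Note that `parseaddr` never raises on malformed input; a failed parse comes back as `('', '')`, which callers are expected to check for.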
""" Standard "encodings" Package

    Standard Python encoding modules are stored in this package
    directory.

    Codec modules must have names corresponding to normalized encoding
    names as defined in the normalize_encoding() function below, e.g.
    'utf-8' must be implemented by the module utf_8.py.

    Each codec module must export the following interface:

    * getregentry() -> codecs.CodecInfo object
    The getregentry() API must return a CodecInfo object with encoder, decoder,
    incrementalencoder, incrementaldecoder, streamwriter and streamreader
    attributes which adhere to the Python Codec Interface Standard.

    In addition, a module may optionally also define the following
    APIs which are then used by the package's codec search function:

    * getaliases() -> sequence of encoding name strings to use as aliases

    Alias names returned by getaliases() must be normalized encoding
    names as defined by normalize_encoding().

Written by Marc-Andre Lemburg (mal@lemburg.com).

(c) Copyright CNRI, All Rights Reserved. NO WARRANTY.

"""

# bpo-671666, bpo-46668: If Python does not implement a codec for the
# current Windows ANSI code page, use the "mbcs" codec instead (it uses
# the WideCharToMultiByte() and MultiByteToWideChar() functions with
# CP_ACP).  Python does not support custom code pages.
import codecs
import sys
from . import aliases

_cache = {}
_unknown = '--unknown--'
_import_tail = ['*']
_aliases = aliases.aliases


class CodecRegistryError(LookupError, SystemError):
    pass


def normalize_encoding(encoding):
    """Normalize an encoding name.

       Normalization works as follows: all non-alphanumeric characters
       except the dot used for Python package names are collapsed and
       replaced with a single underscore, e.g. '  -;#' becomes '_'.
       Leading and trailing underscores are removed.

       Note that encoding names should be ASCII only.
    """
    if isinstance(encoding, bytes):
        encoding = str(encoding, "ascii")

    chars = []
    punct = False
    for c in encoding:
        if c.isalnum() or c == '.':
            if punct and chars:
                chars.append('_')
            if c.isascii():
                chars.append(c)
            punct = False
        else:
            punct = True
    return ''.join(chars)


def search_function(encoding):
    # Cache lookup
    entry = _cache.get(encoding, _unknown)
    if entry is not _unknown:
        return entry

    # Import the module:
    #
    # First try to find an alias for the normalized encoding name and
    # lookup the module using the aliased name, then try to lookup the
    # module using the standard import scheme, i.e. first try in the
    # encodings package, then at top-level.  The import is absolute to
    # prevent the possibly malicious import of a module with side-effects
    # that is not in the 'encodings' package.
    norm_encoding = normalize_encoding(encoding)
    aliased_encoding = _aliases.get(norm_encoding) or \
                       _aliases.get(norm_encoding.replace('.', '_'))
    if aliased_encoding is not None:
        modnames = [aliased_encoding, norm_encoding]
    else:
        modnames = [norm_encoding]
    for modname in modnames:
        if not modname or '.' in modname:
            continue
        try:
            # ImportError may occur because 'encodings.(modname)' does not
            # exist, or because it imports a name that does not exist (see
            # mbcs and oem).
            mod = __import__('encodings.' + modname,
                             fromlist=_import_tail, level=0)
        except ImportError:
            pass
        else:
            break
    else:
        mod = None

    try:
        getregentry = mod.getregentry
    except AttributeError:
        # Not a codec module
        mod = None

    if mod is None:
        # Cache misses
        _cache[encoding] = None
        return None

    # Now ask the module for the registry entry
    entry = getregentry()
    if not isinstance(entry, codecs.CodecInfo):
        if not 4 <= len(entry) <= 7:
            raise CodecRegistryError('module "%s" (%s) failed to register'
                                     % (mod.__name__, mod.__file__))
        if not callable(entry[0]) or not callable(entry[1]) or \
           (entry[2] is not None and not callable(entry[2])) or \
           (entry[3] is not None and not callable(entry[3])) or \
           (len(entry) > 4 and entry[4] is not None and not callable(entry[4])) or \
           (len(entry) > 5 and entry[5] is not None and not callable(entry[5])):
            raise CodecRegistryError('incompatible codecs in module "%s" (%s)'
                                     % (mod.__name__, mod.__file__))
        if len(entry) < 7 or entry[6] is None:
            entry += (None,) * (6 - len(entry)) + \
                     (mod.__name__.split(".", 1)[1],)
        entry = codecs.CodecInfo(*entry)

    # Cache the codec registry entry
    _cache[encoding] = entry

    # Register its aliases (without overwriting previously registered
    # aliases)
    try:
        codecaliases = mod.getaliases()
    except AttributeError:
        pass
    else:
        for alias in codecaliases:
            if alias not in _aliases:
                _aliases[alias] = modname

    # Return the registry entry
    return entry


# Register the search_function in the Python codec registry
codecs.register(search_function)

if sys.platform == 'win32':
    def _alias_mbcs(encoding):
        try:
            import _winapi
            ansi_code_page = "cp%s" % _winapi.GetACP()
            if encoding == ansi_code_page:
                import encodings.mbcs
                return encodings.mbcs.getregentry()
        except ImportError:
            # Imports may fail while we are shutting down
            pass

    codecs.register(_alias_mbcs)
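The lookup machinery above is what drives `codecs.lookup()`; a small sketch of how name normalization behaves from the caller's side:

```python
import codecs

# The search function normalizes names, so punctuation and case
# variants all resolve to the same CodecInfo entry.
info = codecs.lookup('UTF-8')
assert info.name == 'utf-8'
assert codecs.lookup('utf_8').name == 'utf-8'
assert codecs.lookup('utf8').name == 'utf-8'

# When every candidate module fails to import, the search function
# caches and returns None, and the registry raises LookupError.
try:
    codecs.lookup('no-such-codec')
except LookupError:
    pass
else:
    raise AssertionError('expected LookupError')
```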
""" Encoding Aliases Support

    This module is used by the encodings package search function to
    map encodings names to module names.

    Note that the search function normalizes the encoding names before
    doing the lookup, so the mapping will have to map normalized
    encoding names to module names.

    Contents:

        The following aliases dictionary contains mappings of all IANA
        character set names for which the Python core library provides
        codecs.  In addition to these, a few Python specific codec
        aliases have also been added.

"""

# Please keep this list sorted alphabetically by value!
#
# Note: the latin_1 codec is implemented internally in C and is a lot
# faster than the charmap codec iso8859_1 which uses the same encoding;
# this is why the use of the iso8859_1 codec is discouraged and it is
# aliased to latin_1 instead.
#
# The temporary mac CJK aliases at the end of the table will be replaced
# by proper codecs in 3.1.
aliases = { '646' : 'ascii', 'ansi_x3.4_1968' : 'ascii', 'ansi_x3_4_1968' : 'ascii', 'ansi_x3.4_1986' : 'ascii', 'cp367' : 'ascii', 'csascii' : 'ascii', 'ibm367' : 'ascii', 'iso646_us' : 'ascii', 'iso_646.irv_1991' : 'ascii', 'iso_ir_6' : 'ascii', 'us' : 'ascii', 'us_ascii' : 'ascii', 'base64' : 'base64_codec', 'base_64' : 'base64_codec', 'big5_tw' : 'big5', 'csbig5' : 'big5', 'big5_hkscs' : 'big5hkscs', 'hkscs' : 'big5hkscs', 'bz2' : 'bz2_codec', '037' : 'cp037', 'csibm037' : 'cp037', 'ebcdic_cp_ca' : 'cp037', 'ebcdic_cp_nl' : 'cp037', 'ebcdic_cp_us' : 'cp037', 'ebcdic_cp_wt' : 'cp037', 'ibm037' : 'cp037', 'ibm039' : 'cp037', '1026' : 'cp1026', 'csibm1026' : 'cp1026', 'ibm1026' : 'cp1026', '1125' : 'cp1125', 'ibm1125' : 'cp1125', 'cp866u' : 'cp1125', 'ruscii' : 'cp1125', '1140' : 'cp1140', 'ibm1140' : 'cp1140', '1250' : 'cp1250', 'windows_1250' : 'cp1250', '1251' : 'cp1251', 'windows_1251' : 'cp1251', '1252' : 'cp1252', 'windows_1252' : 'cp1252', '1253' : 'cp1253', 'windows_1253' : 'cp1253', '1254' : 'cp1254', 'windows_1254' : 'cp1254', '1255' : 'cp1255', 'windows_1255' : 'cp1255', '1256' : 'cp1256', 'windows_1256' : 'cp1256', '1257' : 'cp1257', 'windows_1257' : 'cp1257', '1258' : 'cp1258', 'windows_1258' : 'cp1258', '273' : 'cp273', 'ibm273' : 'cp273', 'csibm273' : 'cp273', '424' : 'cp424', 'csibm424' : 'cp424', 'ebcdic_cp_he' : 'cp424', 'ibm424' : 'cp424', '437' : 'cp437', 'cspc8codepage437' : 'cp437', 'ibm437' : 'cp437', '500' : 'cp500', 'csibm500' : 'cp500', 'ebcdic_cp_be' : 'cp500', 'ebcdic_cp_ch' : 'cp500', 'ibm500' : 'cp500', '775' : 'cp775', 'cspc775baltic' : 'cp775', 'ibm775' : 'cp775', '850' : 'cp850', 'cspc850multilingual' : 'cp850', 'ibm850' : 'cp850', '852' : 'cp852', 'cspcp852' : 'cp852', 'ibm852' : 'cp852', '855' : 'cp855', 'csibm855' : 'cp855', 'ibm855' : 'cp855', '857' : 'cp857', 'csibm857' : 'cp857', 'ibm857' : 'cp857', '858' : 'cp858', 'csibm858' : 'cp858', 'ibm858' : 'cp858', '860' : 'cp860', 'csibm860' : 'cp860', 'ibm860' : 'cp860', '861' : 
'cp861', 'cp_is' : 'cp861', 'csibm861' : 'cp861', 'ibm861' : 'cp861', '862' : 'cp862', 'cspc862latinhebrew' : 'cp862', 'ibm862' : 'cp862', '863' : 'cp863', 'csibm863' : 'cp863', 'ibm863' : 'cp863', '864' : 'cp864', 'csibm864' : 'cp864', 'ibm864' : 'cp864', '865' : 'cp865', 'csibm865' : 'cp865', 'ibm865' : 'cp865', '866' : 'cp866', 'csibm866' : 'cp866', 'ibm866' : 'cp866', '869' : 'cp869', 'cp_gr' : 'cp869', 'csibm869' : 'cp869', 'ibm869' : 'cp869', '932' : 'cp932', 'ms932' : 'cp932', 'mskanji' : 'cp932', 'ms_kanji' : 'cp932', '949' : 'cp949', 'ms949' : 'cp949', 'uhc' : 'cp949', '950' : 'cp950', 'ms950' : 'cp950', 'jisx0213' : 'euc_jis_2004', 'eucjis2004' : 'euc_jis_2004', 'euc_jis2004' : 'euc_jis_2004', 'eucjisx0213' : 'euc_jisx0213', 'eucjp' : 'euc_jp', 'ujis' : 'euc_jp', 'u_jis' : 'euc_jp', 'euckr' : 'euc_kr', 'korean' : 'euc_kr', 'ksc5601' : 'euc_kr', 'ks_c_5601' : 'euc_kr', 'ks_c_5601_1987' : 'euc_kr', 'ksx1001' : 'euc_kr', 'ks_x_1001' : 'euc_kr', 'gb18030_2000' : 'gb18030', 'chinese' : 'gb2312', 'csiso58gb231280' : 'gb2312', 'euc_cn' : 'gb2312', 'euccn' : 'gb2312', 'eucgb2312_cn' : 'gb2312', 'gb2312_1980' : 'gb2312', 'gb2312_80' : 'gb2312', 'iso_ir_58' : 'gb2312', '936' : 'gbk', 'cp936' : 'gbk', 'ms936' : 'gbk', 'hex' : 'hex_codec', 'roman8' : 'hp_roman8', 'r8' : 'hp_roman8', 'csHPRoman8' : 'hp_roman8', 'cp1051' : 'hp_roman8', 'ibm1051' : 'hp_roman8', 'hzgb' : 'hz', 'hz_gb' : 'hz', 'hz_gb_2312' : 'hz', 'csiso2022jp' : 'iso2022_jp', 'iso2022jp' : 'iso2022_jp', 'iso_2022_jp' : 'iso2022_jp', 'iso2022jp_1' : 'iso2022_jp_1', 'iso_2022_jp_1' : 'iso2022_jp_1', 'iso2022jp_2' : 'iso2022_jp_2', 'iso_2022_jp_2' : 'iso2022_jp_2', 'iso_2022_jp_2004' : 'iso2022_jp_2004', 'iso2022jp_2004' : 'iso2022_jp_2004', 'iso2022jp_3' : 'iso2022_jp_3', 'iso_2022_jp_3' : 'iso2022_jp_3', 'iso2022jp_ext' : 'iso2022_jp_ext', 'iso_2022_jp_ext' : 'iso2022_jp_ext', 'csiso2022kr' : 'iso2022_kr', 'iso2022kr' : 'iso2022_kr', 'iso_2022_kr' : 'iso2022_kr', 'csisolatin6' : 'iso8859_10', 
'iso_8859_10' : 'iso8859_10', 'iso_8859_10_1992' : 'iso8859_10', 'iso_ir_157' : 'iso8859_10', 'l6' : 'iso8859_10', 'latin6' : 'iso8859_10', 'thai' : 'iso8859_11', 'iso_8859_11' : 'iso8859_11', 'iso_8859_11_2001' : 'iso8859_11', 'iso_8859_13' : 'iso8859_13', 'l7' : 'iso8859_13', 'latin7' : 'iso8859_13', 'iso_8859_14' : 'iso8859_14', 'iso_8859_14_1998' : 'iso8859_14', 'iso_celtic' : 'iso8859_14', 'iso_ir_199' : 'iso8859_14', 'l8' : 'iso8859_14', 'latin8' : 'iso8859_14', 'iso_8859_15' : 'iso8859_15', 'l9' : 'iso8859_15', 'latin9' : 'iso8859_15', 'iso_8859_16' : 'iso8859_16', 'iso_8859_16_2001' : 'iso8859_16', 'iso_ir_226' : 'iso8859_16', 'l10' : 'iso8859_16', 'latin10' : 'iso8859_16', 'csisolatin2' : 'iso8859_2', 'iso_8859_2' : 'iso8859_2', 'iso_8859_2_1987' : 'iso8859_2', 'iso_ir_101' : 'iso8859_2', 'l2' : 'iso8859_2', 'latin2' : 'iso8859_2', 'csisolatin3' : 'iso8859_3', 'iso_8859_3' : 'iso8859_3', 'iso_8859_3_1988' : 'iso8859_3', 'iso_ir_109' : 'iso8859_3', 'l3' : 'iso8859_3', 'latin3' : 'iso8859_3', 'csisolatin4' : 'iso8859_4', 'iso_8859_4' : 'iso8859_4', 'iso_8859_4_1988' : 'iso8859_4', 'iso_ir_110' : 'iso8859_4', 'l4' : 'iso8859_4', 'latin4' : 'iso8859_4', 'csisolatincyrillic' : 'iso8859_5', 'cyrillic' : 'iso8859_5', 'iso_8859_5' : 'iso8859_5', 'iso_8859_5_1988' : 'iso8859_5', 'iso_ir_144' : 'iso8859_5', 'arabic' : 'iso8859_6', 'asmo_708' : 'iso8859_6', 'csisolatinarabic' : 'iso8859_6', 'ecma_114' : 'iso8859_6', 'iso_8859_6' : 'iso8859_6', 'iso_8859_6_1987' : 'iso8859_6', 'iso_ir_127' : 'iso8859_6', 'csisolatingreek' : 'iso8859_7', 'ecma_118' : 'iso8859_7', 'elot_928' : 'iso8859_7', 'greek' : 'iso8859_7', 'greek8' : 'iso8859_7', 'iso_8859_7' : 'iso8859_7', 'iso_8859_7_1987' : 'iso8859_7', 'iso_ir_126' : 'iso8859_7', 'csisolatinhebrew' : 'iso8859_8', 'hebrew' : 'iso8859_8', 'iso_8859_8' : 'iso8859_8', 'iso_8859_8_1988' : 'iso8859_8', 'iso_ir_138' : 'iso8859_8', 'csisolatin5' : 'iso8859_9', 'iso_8859_9' : 'iso8859_9', 'iso_8859_9_1989' : 'iso8859_9', 'iso_ir_148' : 
'iso8859_9', 'l5' : 'iso8859_9', 'latin5' : 'iso8859_9', 'cp1361' : 'johab', 'ms1361' : 'johab', 'cskoi8r' : 'koi8_r', 'kz_1048' : 'kz1048', 'rk1048' : 'kz1048', 'strk1048_2002' : 'kz1048', '8859' : 'latin_1', 'cp819' : 'latin_1', 'csisolatin1' : 'latin_1', 'ibm819' : 'latin_1', 'iso8859' : 'latin_1', 'iso8859_1' : 'latin_1', 'iso_8859_1' : 'latin_1', 'iso_8859_1_1987' : 'latin_1', 'iso_ir_100' : 'latin_1', 'l1' : 'latin_1', 'latin' : 'latin_1', 'latin1' : 'latin_1', 'maccyrillic' : 'mac_cyrillic', 'macgreek' : 'mac_greek', 'maciceland' : 'mac_iceland', 'maccentraleurope' : 'mac_latin2', 'mac_centeuro' : 'mac_latin2', 'maclatin2' : 'mac_latin2', 'macintosh' : 'mac_roman', 'macroman' : 'mac_roman', 'macturkish' : 'mac_turkish', 'ansi' : 'mbcs', 'dbcs' : 'mbcs', 'csptcp154' : 'ptcp154', 'pt154' : 'ptcp154', 'cp154' : 'ptcp154', 'cyrillic_asian' : 'ptcp154', 'quopri' : 'quopri_codec', 'quoted_printable' : 'quopri_codec', 'quotedprintable' : 'quopri_codec', 'rot13' : 'rot_13', 'csshiftjis' : 'shift_jis', 'shiftjis' : 'shift_jis', 'sjis' : 'shift_jis', 's_jis' : 'shift_jis', 'shiftjis2004' : 'shift_jis_2004', 'sjis_2004' : 'shift_jis_2004', 's_jis_2004' : 'shift_jis_2004', 'shiftjisx0213' : 'shift_jisx0213', 'sjisx0213' : 'shift_jisx0213', 's_jisx0213' : 'shift_jisx0213', 'tis620' : 'tis_620', 'tis_620_0' : 'tis_620', 'tis_620_2529_0' : 'tis_620', 'tis_620_2529_1' : 'tis_620', 'iso_ir_166' : 'tis_620', 'u16' : 'utf_16', 'utf16' : 'utf_16', 'unicodebigunmarked' : 'utf_16_be', 'utf_16be' : 'utf_16_be', 'unicodelittleunmarked' : 'utf_16_le', 'utf_16le' : 'utf_16_le', 'u32' : 'utf_32', 'utf32' : 'utf_32', 'utf_32be' : 'utf_32_be', 'utf_32le' : 'utf_32_le', 'u7' : 'utf_7', 'utf7' : 'utf_7', 'unicode_1_1_utf_7' : 'utf_7', 'u8' : 'utf_8', 'utf' : 'utf_8', 'utf8' : 'utf_8', 'utf8_ucs2' : 'utf_8', 'utf8_ucs4' : 'utf_8', 'cp65001' : 'utf_8', 'uu' : 'uu_codec', 'zip' : 'zlib_codec', 'zlib' : 'zlib_codec', 'x_mac_japanese' : 'shift_jis', 'x_mac_korean' : 'euc_kr', 
'x_mac_simp_chinese' : 'gb2312', 'x_mac_trad_chinese' : 'big5', }
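A brief sketch of how this table interacts with `codecs.lookup()`; the specific aliases exercised (`latin1`, `u8`) are entries from the dictionary above:

```python
import codecs
from encodings.aliases import aliases

# Keys are normalized names; values are module names in the encodings
# package.
assert aliases['latin1'] == 'latin_1'
assert aliases['u8'] == 'utf_8'

# codecs.lookup() consults this table via the package search function,
# so the discouraged iso8859_1 spelling and 'latin1' both resolve to
# the fast C latin_1 codec.
assert codecs.lookup('latin1').name == 'iso8859-1'
assert codecs.lookup('iso-8859-1').name == 'iso8859-1'
```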
""" Python 'ascii' Codec


Written by Marc-Andre Lemburg (mal@lemburg.com).

(c) Copyright CNRI, All Rights Reserved. NO WARRANTY.

"""

# Codec APIs
#
# Note: binding the codecs.ascii_* C functions as class attributes below
# will result in the class not converting them to methods.  This is
# intended.
import codecs


class Codec(codecs.Codec):
    encode = codecs.ascii_encode
    decode = codecs.ascii_decode


class IncrementalEncoder(codecs.IncrementalEncoder):
    def encode(self, input, final=False):
        return codecs.ascii_encode(input, self.errors)[0]


class IncrementalDecoder(codecs.IncrementalDecoder):
    def decode(self, input, final=False):
        return codecs.ascii_decode(input, self.errors)[0]


class StreamWriter(Codec, codecs.StreamWriter):
    pass


class StreamReader(Codec, codecs.StreamReader):
    pass


class StreamConverter(StreamWriter, StreamReader):
    encode = codecs.ascii_decode
    decode = codecs.ascii_encode


# encodings module API

def getregentry():
    return codecs.CodecInfo(
        name='ascii',
        encode=Codec.encode,
        decode=Codec.decode,
        incrementalencoder=IncrementalEncoder,
        incrementaldecoder=IncrementalDecoder,
        streamwriter=StreamWriter,
        streamreader=StreamReader,
    )
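A quick sketch of the underlying `codecs.ascii_encode` API that this module binds directly as class attributes:

```python
import codecs

# ascii_encode returns a (bytes, length-consumed) pair, exactly what
# the Codec class above exposes as its encode attribute.
encoded, consumed = codecs.ascii_encode('hello')
assert encoded == b'hello' and consumed == 5

# Non-ASCII input fails under the default 'strict' error handler...
try:
    codecs.ascii_encode('h\xe9llo')
except UnicodeEncodeError:
    pass
else:
    raise AssertionError('expected UnicodeEncodeError')

# ...but alternate error handlers are honoured.
assert codecs.ascii_encode('h\xe9llo', 'replace')[0] == b'h?llo'
```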
"""Python 'base64_codec' Codec - base64 content transfer encoding.

This codec de/encodes from bytes to bytes.

Written by Marc-Andre Lemburg (mal@lemburg.com).
"""
import codecs
import base64


# Codec APIs

def base64_encode(input, errors='strict'):
    assert errors == 'strict'
    return (base64.encodebytes(input), len(input))


def base64_decode(input, errors='strict'):
    assert errors == 'strict'
    return (base64.decodebytes(input), len(input))


class Codec(codecs.Codec):
    def encode(self, input, errors='strict'):
        return base64_encode(input, errors)

    def decode(self, input, errors='strict'):
        return base64_decode(input, errors)


class IncrementalEncoder(codecs.IncrementalEncoder):
    def encode(self, input, final=False):
        assert self.errors == 'strict'
        return base64.encodebytes(input)


class IncrementalDecoder(codecs.IncrementalDecoder):
    def decode(self, input, final=False):
        assert self.errors == 'strict'
        return base64.decodebytes(input)


class StreamWriter(Codec, codecs.StreamWriter):
    charbuffertype = bytes


class StreamReader(Codec, codecs.StreamReader):
    charbuffertype = bytes


# encodings module API

def getregentry():
    return codecs.CodecInfo(
        name='base64',
        encode=base64_encode,
        decode=base64_decode,
        incrementalencoder=IncrementalEncoder,
        incrementaldecoder=IncrementalDecoder,
        streamwriter=StreamWriter,
        streamreader=StreamReader,
        _is_text_encoding=False,
    )
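Since `getregentry()` registers this as the bytes-to-bytes codec `'base64'`, it is reached through `codecs.encode`/`codecs.decode`; a short sketch:

```python
import codecs

# The codec routes through base64.encodebytes(), which appends a
# trailing newline to its output.
encoded = codecs.encode(b'hello world', 'base64')
assert encoded == b'aGVsbG8gd29ybGQ=\n'
assert codecs.decode(encoded, 'base64') == b'hello world'

# It is a bytes-to-bytes codec (_is_text_encoding=False), so str input
# is rejected.
try:
    codecs.encode('hello', 'base64')
except TypeError:
    pass
else:
    raise AssertionError('expected TypeError')
```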
#
# big5.py: Python Unicode Codec for BIG5
#
# Written by Hye-Shik Chang <perky@FreeBSD.org>
#
import _codecs_tw, codecs
import _multibytecodec as mbc

codec = _codecs_tw.getcodec('big5')


class Codec(codecs.Codec):
    encode = codec.encode
    decode = codec.decode


class IncrementalEncoder(mbc.MultibyteIncrementalEncoder,
                         codecs.IncrementalEncoder):
    codec = codec


class IncrementalDecoder(mbc.MultibyteIncrementalDecoder,
                         codecs.IncrementalDecoder):
    codec = codec


class StreamReader(Codec, mbc.MultibyteStreamReader, codecs.StreamReader):
    codec = codec


class StreamWriter(Codec, mbc.MultibyteStreamWriter, codecs.StreamWriter):
    codec = codec


def getregentry():
    return codecs.CodecInfo(
        name='big5',
        encode=Codec().encode,
        decode=Codec().decode,
        incrementalencoder=IncrementalEncoder,
        incrementaldecoder=IncrementalDecoder,
        streamreader=StreamReader,
        streamwriter=StreamWriter,
    )
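A round-trip sketch using the registered `'big5'` codec, including the incremental decoder's buffering of a multi-byte character split across feeds:

```python
import codecs

# Round-trip a string through the registered 'big5' codec.
text = '\u4f60\u597d'          # two common Han characters
data = codecs.encode(text, 'big5')
assert isinstance(data, bytes)
assert codecs.decode(data, 'big5') == text

# The incremental decoder buffers a partial multi-byte sequence and
# emits the character once the remaining byte arrives.
dec = codecs.getincrementaldecoder('big5')()
out = dec.decode(data[:1]) + dec.decode(data[1:], final=True)
assert out == text
```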
# big5hkscs.py: Python Unicode Codec for BIG5HKSCS
#
# Written by Hye-Shik Chang <perky@FreeBSD.org>

import _codecs_hk, codecs
import _multibytecodec as mbc

codec = _codecs_hk.getcodec('big5hkscs')

class Codec(codecs.Codec):
    encode = codec.encode
    decode = codec.decode

class IncrementalEncoder(mbc.MultibyteIncrementalEncoder,
                         codecs.IncrementalEncoder):
    codec = codec

class IncrementalDecoder(mbc.MultibyteIncrementalDecoder,
                         codecs.IncrementalDecoder):
    codec = codec

class StreamReader(Codec, mbc.MultibyteStreamReader, codecs.StreamReader):
    codec = codec

class StreamWriter(Codec, mbc.MultibyteStreamWriter, codecs.StreamWriter):
    codec = codec

def getregentry():
    return codecs.CodecInfo(
        name='big5hkscs',
        encode=Codec().encode,
        decode=Codec().decode,
        incrementalencoder=IncrementalEncoder,
        incrementaldecoder=IncrementalDecoder,
        streamreader=StreamReader,
        streamwriter=StreamWriter,
    )
"""Python 'bz2_codec' Codec - bz2 compression encoding.

This codec de/encodes from bytes to bytes and is therefore usable with
bytes.transform() and bytes.untransform().

Adapted by Raymond Hettinger from zlib_codec.py which was written by
Marc-Andre Lemburg (mal@lemburg.com).
"""
import codecs
import bz2  # this codec needs the optional bz2 module !

### Codec APIs

def bz2_encode(input, errors='strict'):
    assert errors == 'strict'
    return (bz2.compress(input), len(input))

def bz2_decode(input, errors='strict'):
    assert errors == 'strict'
    return (bz2.decompress(input), len(input))

class Codec(codecs.Codec):
    def encode(self, input, errors='strict'):
        return bz2_encode(input, errors)
    def decode(self, input, errors='strict'):
        return bz2_decode(input, errors)

class IncrementalEncoder(codecs.IncrementalEncoder):
    def __init__(self, errors='strict'):
        assert errors == 'strict'
        self.errors = errors
        self.compressobj = bz2.BZ2Compressor()

    def encode(self, input, final=False):
        if final:
            c = self.compressobj.compress(input)
            return c + self.compressobj.flush()
        else:
            return self.compressobj.compress(input)

    def reset(self):
        self.compressobj = bz2.BZ2Compressor()

class IncrementalDecoder(codecs.IncrementalDecoder):
    def __init__(self, errors='strict'):
        assert errors == 'strict'
        self.errors = errors
        self.decompressobj = bz2.BZ2Decompressor()

    def decode(self, input, final=False):
        try:
            return self.decompressobj.decompress(input)
        except EOFError:
            return ''

    def reset(self):
        self.decompressobj = bz2.BZ2Decompressor()

class StreamWriter(Codec, codecs.StreamWriter):
    charbuffertype = bytes

class StreamReader(Codec, codecs.StreamReader):
    charbuffertype = bytes

### encodings module API

def getregentry():
    return codecs.CodecInfo(
        name="bz2",
        encode=bz2_encode,
        decode=bz2_decode,
        incrementalencoder=IncrementalEncoder,
        incrementaldecoder=IncrementalDecoder,
        streamwriter=StreamWriter,
        streamreader=StreamReader,
        _is_text_encoding=False,
    )
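Like the base64 codec, this is a bytes-to-bytes transform reachable through `codecs.encode`/`codecs.decode`. A round-trip check, also confirming the bz2 stream magic produced by the underlying `bz2` module:

```python
import codecs

# Compress and decompress through the registered 'bz2' codec.
payload = b"bz2 compresses repetitive data well " * 20
compressed = codecs.encode(payload, "bz2")
restored = codecs.decode(compressed, "bz2")

assert restored == payload
assert compressed[:3] == b"BZh"        # bz2 stream magic number
assert len(compressed) < len(payload)  # repetitive input shrinks
```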
""" Generic Python Character Mapping Codec.

    Use this codec directly rather than through the automatic
    conversion mechanisms supplied by unicode() and .encode().

Written by Marc-Andre Lemburg (mal@lemburg.com).

(c) Copyright CNRI, All Rights Reserved. NO WARRANTY.

"""
import codecs

### Codec APIs

class Codec(codecs.Codec):
    # Note: Binding these as C functions will result in the class not
    # converting them to methods. This is intended.
    encode = codecs.charmap_encode
    decode = codecs.charmap_decode

class IncrementalEncoder(codecs.IncrementalEncoder):
    def __init__(self, errors='strict', mapping=None):
        codecs.IncrementalEncoder.__init__(self, errors)
        self.mapping = mapping

    def encode(self, input, final=False):
        return codecs.charmap_encode(input, self.errors, self.mapping)[0]

class IncrementalDecoder(codecs.IncrementalDecoder):
    def __init__(self, errors='strict', mapping=None):
        codecs.IncrementalDecoder.__init__(self, errors)
        self.mapping = mapping

    def decode(self, input, final=False):
        return codecs.charmap_decode(input, self.errors, self.mapping)[0]

class StreamWriter(Codec, codecs.StreamWriter):
    def __init__(self, stream, errors='strict', mapping=None):
        codecs.StreamWriter.__init__(self, stream, errors)
        self.mapping = mapping

    def encode(self, input, errors='strict'):
        return Codec.encode(input, errors, self.mapping)

class StreamReader(Codec, codecs.StreamReader):
    def __init__(self, stream, errors='strict', mapping=None):
        codecs.StreamReader.__init__(self, stream, errors)
        self.mapping = mapping

    def decode(self, input, errors='strict'):
        return Codec.decode(input, errors, self.mapping)

### encodings module API

def getregentry():
    return codecs.CodecInfo(
        name='charmap',
        encode=Codec.encode,
        decode=Codec.decode,
        incrementalencoder=IncrementalEncoder,
        incrementaldecoder=IncrementalDecoder,
        streamwriter=StreamWriter,
        streamreader=StreamReader,
    )
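The `mapping` argument threaded through every class above is what makes this codec generic: each single-byte charset (latin-1 variants, the cp* code pages, etc.) supplies its own table. For decoding, the table can simply be a string indexed by byte value:

```python
import codecs

# A decoding table maps each byte value to the character at that index.
# The table here is a toy example, not a real charset.
table = "abcdefghijklmnop"
text, consumed = codecs.charmap_decode(b"\x00\x05\x0f", "strict", table)

assert text == "afp"   # bytes 0, 5, 15 -> table[0], table[5], table[15]
assert consumed == 3
```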
# cp932.py: Python Unicode Codec for CP932
#
# Written by Hye-Shik Chang <perky@FreeBSD.org>

import _codecs_jp, codecs
import _multibytecodec as mbc

codec = _codecs_jp.getcodec('cp932')

class Codec(codecs.Codec):
    encode = codec.encode
    decode = codec.decode

class IncrementalEncoder(mbc.MultibyteIncrementalEncoder,
                         codecs.IncrementalEncoder):
    codec = codec

class IncrementalDecoder(mbc.MultibyteIncrementalDecoder,
                         codecs.IncrementalDecoder):
    codec = codec

class StreamReader(Codec, mbc.MultibyteStreamReader, codecs.StreamReader):
    codec = codec

class StreamWriter(Codec, mbc.MultibyteStreamWriter, codecs.StreamWriter):
    codec = codec

def getregentry():
    return codecs.CodecInfo(
        name='cp932',
        encode=Codec().encode,
        decode=Codec().decode,
        incrementalencoder=IncrementalEncoder,
        incrementaldecoder=IncrementalDecoder,
        streamreader=StreamReader,
        streamwriter=StreamWriter,
    )
# cp949.py: Python Unicode Codec for CP949
#
# Written by Hye-Shik Chang <perky@FreeBSD.org>

import _codecs_kr, codecs
import _multibytecodec as mbc

codec = _codecs_kr.getcodec('cp949')

class Codec(codecs.Codec):
    encode = codec.encode
    decode = codec.decode

class IncrementalEncoder(mbc.MultibyteIncrementalEncoder,
                         codecs.IncrementalEncoder):
    codec = codec

class IncrementalDecoder(mbc.MultibyteIncrementalDecoder,
                         codecs.IncrementalDecoder):
    codec = codec

class StreamReader(Codec, mbc.MultibyteStreamReader, codecs.StreamReader):
    codec = codec

class StreamWriter(Codec, mbc.MultibyteStreamWriter, codecs.StreamWriter):
    codec = codec

def getregentry():
    return codecs.CodecInfo(
        name='cp949',
        encode=Codec().encode,
        decode=Codec().decode,
        incrementalencoder=IncrementalEncoder,
        incrementaldecoder=IncrementalDecoder,
        streamreader=StreamReader,
        streamwriter=StreamWriter,
    )
# cp950.py: Python Unicode Codec for CP950
#
# Written by Hye-Shik Chang <perky@FreeBSD.org>

import _codecs_tw, codecs
import _multibytecodec as mbc

codec = _codecs_tw.getcodec('cp950')

class Codec(codecs.Codec):
    encode = codec.encode
    decode = codec.decode

class IncrementalEncoder(mbc.MultibyteIncrementalEncoder,
                         codecs.IncrementalEncoder):
    codec = codec

class IncrementalDecoder(mbc.MultibyteIncrementalDecoder,
                         codecs.IncrementalDecoder):
    codec = codec

class StreamReader(Codec, mbc.MultibyteStreamReader, codecs.StreamReader):
    codec = codec

class StreamWriter(Codec, mbc.MultibyteStreamWriter, codecs.StreamWriter):
    codec = codec

def getregentry():
    return codecs.CodecInfo(
        name='cp950',
        encode=Codec().encode,
        decode=Codec().decode,
        incrementalencoder=IncrementalEncoder,
        incrementaldecoder=IncrementalDecoder,
        streamreader=StreamReader,
        streamwriter=StreamWriter,
    )
# euc_jis_2004.py: Python Unicode Codec for EUC_JIS_2004
#
# Written by Hye-Shik Chang <perky@FreeBSD.org>

import _codecs_jp, codecs
import _multibytecodec as mbc

codec = _codecs_jp.getcodec('euc_jis_2004')

class Codec(codecs.Codec):
    encode = codec.encode
    decode = codec.decode

class IncrementalEncoder(mbc.MultibyteIncrementalEncoder,
                         codecs.IncrementalEncoder):
    codec = codec

class IncrementalDecoder(mbc.MultibyteIncrementalDecoder,
                         codecs.IncrementalDecoder):
    codec = codec

class StreamReader(Codec, mbc.MultibyteStreamReader, codecs.StreamReader):
    codec = codec

class StreamWriter(Codec, mbc.MultibyteStreamWriter, codecs.StreamWriter):
    codec = codec

def getregentry():
    return codecs.CodecInfo(
        name='euc_jis_2004',
        encode=Codec().encode,
        decode=Codec().decode,
        incrementalencoder=IncrementalEncoder,
        incrementaldecoder=IncrementalDecoder,
        streamreader=StreamReader,
        streamwriter=StreamWriter,
    )
# euc_jisx0213.py: Python Unicode Codec for EUC_JISX0213
#
# Written by Hye-Shik Chang <perky@FreeBSD.org>

import _codecs_jp, codecs
import _multibytecodec as mbc

codec = _codecs_jp.getcodec('euc_jisx0213')

class Codec(codecs.Codec):
    encode = codec.encode
    decode = codec.decode

class IncrementalEncoder(mbc.MultibyteIncrementalEncoder,
                         codecs.IncrementalEncoder):
    codec = codec

class IncrementalDecoder(mbc.MultibyteIncrementalDecoder,
                         codecs.IncrementalDecoder):
    codec = codec

class StreamReader(Codec, mbc.MultibyteStreamReader, codecs.StreamReader):
    codec = codec

class StreamWriter(Codec, mbc.MultibyteStreamWriter, codecs.StreamWriter):
    codec = codec

def getregentry():
    return codecs.CodecInfo(
        name='euc_jisx0213',
        encode=Codec().encode,
        decode=Codec().decode,
        incrementalencoder=IncrementalEncoder,
        incrementaldecoder=IncrementalDecoder,
        streamreader=StreamReader,
        streamwriter=StreamWriter,
    )
# euc_jp.py: Python Unicode Codec for EUC_JP
#
# Written by Hye-Shik Chang <perky@FreeBSD.org>

import _codecs_jp, codecs
import _multibytecodec as mbc

codec = _codecs_jp.getcodec('euc_jp')

class Codec(codecs.Codec):
    encode = codec.encode
    decode = codec.decode

class IncrementalEncoder(mbc.MultibyteIncrementalEncoder,
                         codecs.IncrementalEncoder):
    codec = codec

class IncrementalDecoder(mbc.MultibyteIncrementalDecoder,
                         codecs.IncrementalDecoder):
    codec = codec

class StreamReader(Codec, mbc.MultibyteStreamReader, codecs.StreamReader):
    codec = codec

class StreamWriter(Codec, mbc.MultibyteStreamWriter, codecs.StreamWriter):
    codec = codec

def getregentry():
    return codecs.CodecInfo(
        name='euc_jp',
        encode=Codec().encode,
        decode=Codec().decode,
        incrementalencoder=IncrementalEncoder,
        incrementaldecoder=IncrementalDecoder,
        streamreader=StreamReader,
        streamwriter=StreamWriter,
    )
# euc_kr.py: Python Unicode Codec for EUC_KR
#
# Written by Hye-Shik Chang <perky@FreeBSD.org>

import _codecs_kr, codecs
import _multibytecodec as mbc

codec = _codecs_kr.getcodec('euc_kr')

class Codec(codecs.Codec):
    encode = codec.encode
    decode = codec.decode

class IncrementalEncoder(mbc.MultibyteIncrementalEncoder,
                         codecs.IncrementalEncoder):
    codec = codec

class IncrementalDecoder(mbc.MultibyteIncrementalDecoder,
                         codecs.IncrementalDecoder):
    codec = codec

class StreamReader(Codec, mbc.MultibyteStreamReader, codecs.StreamReader):
    codec = codec

class StreamWriter(Codec, mbc.MultibyteStreamWriter, codecs.StreamWriter):
    codec = codec

def getregentry():
    return codecs.CodecInfo(
        name='euc_kr',
        encode=Codec().encode,
        decode=Codec().decode,
        incrementalencoder=IncrementalEncoder,
        incrementaldecoder=IncrementalDecoder,
        streamreader=StreamReader,
        streamwriter=StreamWriter,
    )
# gb18030.py: Python Unicode Codec for GB18030
#
# Written by Hye-Shik Chang <perky@FreeBSD.org>

import _codecs_cn, codecs
import _multibytecodec as mbc

codec = _codecs_cn.getcodec('gb18030')

class Codec(codecs.Codec):
    encode = codec.encode
    decode = codec.decode

class IncrementalEncoder(mbc.MultibyteIncrementalEncoder,
                         codecs.IncrementalEncoder):
    codec = codec

class IncrementalDecoder(mbc.MultibyteIncrementalDecoder,
                         codecs.IncrementalDecoder):
    codec = codec

class StreamReader(Codec, mbc.MultibyteStreamReader, codecs.StreamReader):
    codec = codec

class StreamWriter(Codec, mbc.MultibyteStreamWriter, codecs.StreamWriter):
    codec = codec

def getregentry():
    return codecs.CodecInfo(
        name='gb18030',
        encode=Codec().encode,
        decode=Codec().decode,
        incrementalencoder=IncrementalEncoder,
        incrementaldecoder=IncrementalDecoder,
        streamreader=StreamReader,
        streamwriter=StreamWriter,
    )
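All of the multibyte codec modules in this family share the same shape: the heavy lifting lives in a C extension module (`_codecs_cn`, `_codecs_jp`, ...), and the Python module just wires the returned codec object into the standard `codecs` class hierarchy. Once registered, the codec works through the ordinary `str.encode`/`bytes.decode` interface; a round-trip check with gb18030:

```python
# Round-trip a mixed ASCII/CJK string through the registered 'gb18030' codec.
text = "abc \u4e2d\u6587"          # "abc" plus two CJK characters
data = text.encode("gb18030")

assert data.decode("gb18030") == text
assert data.startswith(b"abc ")     # the ASCII range encodes unchanged
```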
# gb2312.py: Python Unicode Codec for GB2312
#
# Written by Hye-Shik Chang <perky@FreeBSD.org>

import _codecs_cn, codecs
import _multibytecodec as mbc

codec = _codecs_cn.getcodec('gb2312')

class Codec(codecs.Codec):
    encode = codec.encode
    decode = codec.decode

class IncrementalEncoder(mbc.MultibyteIncrementalEncoder,
                         codecs.IncrementalEncoder):
    codec = codec

class IncrementalDecoder(mbc.MultibyteIncrementalDecoder,
                         codecs.IncrementalDecoder):
    codec = codec

class StreamReader(Codec, mbc.MultibyteStreamReader, codecs.StreamReader):
    codec = codec

class StreamWriter(Codec, mbc.MultibyteStreamWriter, codecs.StreamWriter):
    codec = codec

def getregentry():
    return codecs.CodecInfo(
        name='gb2312',
        encode=Codec().encode,
        decode=Codec().decode,
        incrementalencoder=IncrementalEncoder,
        incrementaldecoder=IncrementalDecoder,
        streamreader=StreamReader,
        streamwriter=StreamWriter,
    )
# gbk.py: Python Unicode Codec for GBK
#
# Written by Hye-Shik Chang <perky@FreeBSD.org>

import _codecs_cn, codecs
import _multibytecodec as mbc

codec = _codecs_cn.getcodec('gbk')

class Codec(codecs.Codec):
    encode = codec.encode
    decode = codec.decode

class IncrementalEncoder(mbc.MultibyteIncrementalEncoder,
                         codecs.IncrementalEncoder):
    codec = codec

class IncrementalDecoder(mbc.MultibyteIncrementalDecoder,
                         codecs.IncrementalDecoder):
    codec = codec

class StreamReader(Codec, mbc.MultibyteStreamReader, codecs.StreamReader):
    codec = codec

class StreamWriter(Codec, mbc.MultibyteStreamWriter, codecs.StreamWriter):
    codec = codec

def getregentry():
    return codecs.CodecInfo(
        name='gbk',
        encode=Codec().encode,
        decode=Codec().decode,
        incrementalencoder=IncrementalEncoder,
        incrementaldecoder=IncrementalDecoder,
        streamreader=StreamReader,
        streamwriter=StreamWriter,
    )
"""Python 'hex_codec' Codec - 2-digit hex content transfer encoding.

This codec de/encodes from bytes to bytes.

Written by Marc-Andre Lemburg (mal@lemburg.com).
"""
import codecs
import binascii

### Codec APIs

def hex_encode(input, errors='strict'):
    assert errors == 'strict'
    return (binascii.b2a_hex(input), len(input))

def hex_decode(input, errors='strict'):
    assert errors == 'strict'
    return (binascii.a2b_hex(input), len(input))

class Codec(codecs.Codec):
    def encode(self, input, errors='strict'):
        return hex_encode(input, errors)
    def decode(self, input, errors='strict'):
        return hex_decode(input, errors)

class IncrementalEncoder(codecs.IncrementalEncoder):
    def encode(self, input, final=False):
        assert self.errors == 'strict'
        return binascii.b2a_hex(input)

class IncrementalDecoder(codecs.IncrementalDecoder):
    def decode(self, input, final=False):
        assert self.errors == 'strict'
        return binascii.a2b_hex(input)

class StreamWriter(Codec, codecs.StreamWriter):
    charbuffertype = bytes

class StreamReader(Codec, codecs.StreamReader):
    charbuffertype = bytes

### encodings module API

def getregentry():
    return codecs.CodecInfo(
        name='hex',
        encode=hex_encode,
        decode=hex_decode,
        incrementalencoder=IncrementalEncoder,
        incrementaldecoder=IncrementalDecoder,
        streamwriter=StreamWriter,
        streamreader=StreamReader,
        _is_text_encoding=False,
    )
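This codec is a thin wrapper around `binascii.b2a_hex`/`a2b_hex`, so each input byte becomes exactly two lowercase hex digits. A quick check through the registry:

```python
import codecs

# Each byte expands to two hex digits.
encoded = codecs.encode(b"\xde\xad\xbe\xef", "hex")
assert encoded == b"deadbeef"
assert codecs.decode(encoded, "hex") == b"\xde\xad\xbe\xef"
```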
# hz.py: Python Unicode Codec for HZ
#
# Written by Hye-Shik Chang <perky@FreeBSD.org>

import _codecs_cn, codecs
import _multibytecodec as mbc

codec = _codecs_cn.getcodec('hz')

class Codec(codecs.Codec):
    encode = codec.encode
    decode = codec.decode

class IncrementalEncoder(mbc.MultibyteIncrementalEncoder,
                         codecs.IncrementalEncoder):
    codec = codec

class IncrementalDecoder(mbc.MultibyteIncrementalDecoder,
                         codecs.IncrementalDecoder):
    codec = codec

class StreamReader(Codec, mbc.MultibyteStreamReader, codecs.StreamReader):
    codec = codec

class StreamWriter(Codec, mbc.MultibyteStreamWriter, codecs.StreamWriter):
    codec = codec

def getregentry():
    return codecs.CodecInfo(
        name='hz',
        encode=Codec().encode,
        decode=Codec().decode,
        incrementalencoder=IncrementalEncoder,
        incrementaldecoder=IncrementalDecoder,
        streamreader=StreamReader,
        streamwriter=StreamWriter,
    )
# This module implements the RFCs 3490 (IDNA) and 3491 (Nameprep)

import stringprep, re, codecs
from unicodedata import ucd_3_2_0 as unicodedata

# IDNA section 3.1
dots = re.compile("[\u002E\u3002\uFF0E\uFF61]")

# IDNA section 5
ace_prefix = b"xn--"
sace_prefix = "xn--"

# This assumes query strings, so AllowUnassigned is true
def nameprep(label):
    # Map
    newlabel = []
    for c in label:
        if stringprep.in_table_b1(c):
            # Map to nothing
            continue
        newlabel.append(stringprep.map_table_b2(c))
    label = "".join(newlabel)

    # Normalize
    label = unicodedata.normalize("NFKC", label)

    # Prohibit
    for c in label:
        if stringprep.in_table_c12(c) or \
           stringprep.in_table_c22(c) or \
           stringprep.in_table_c3(c) or \
           stringprep.in_table_c4(c) or \
           stringprep.in_table_c5(c) or \
           stringprep.in_table_c6(c) or \
           stringprep.in_table_c7(c) or \
           stringprep.in_table_c8(c) or \
           stringprep.in_table_c9(c):
            raise UnicodeError("Invalid character %r" % c)

    # Check bidi
    RandAL = [stringprep.in_table_d1(x) for x in label]
    if any(RandAL):
        # There is a RandAL char in the string. Must perform further tests:
        # 1) The characters in section 5.8 MUST be prohibited.
        #    This is table C.8, which was already checked.
        # 2) If a string contains any RandALCat character, the string
        #    MUST NOT contain any LCat character.
        if any(stringprep.in_table_d2(x) for x in label):
            raise UnicodeError("Violation of BIDI requirement 2")
        # 3) If a string contains any RandALCat character, a RandALCat
        #    character MUST be the first character of the string, and a
        #    RandALCat character MUST be the last character of the string.
        if not RandAL[0] or not RandAL[-1]:
            raise UnicodeError("Violation of BIDI requirement 3")

    return label

def ToASCII(label):
    try:
        # Step 1: try ASCII
        label = label.encode("ascii")
    except UnicodeError:
        pass
    else:
        # Skip to step 3: UseSTD3ASCIIRules is false, so
        # Skip to step 8.
        if 0 < len(label) < 64:
            return label
        raise UnicodeError("label empty or too long")

    # Step 2: nameprep
    label = nameprep(label)

    # Step 3: UseSTD3ASCIIRules is false
    # Step 4: try ASCII
    try:
        label = label.encode("ascii")
    except UnicodeError:
        pass
    else:
        # Skip to step 8.
        if 0 < len(label) < 64:
            return label
        raise UnicodeError("label empty or too long")

    # Step 5: Check ACE prefix
    if label.startswith(sace_prefix):
        raise UnicodeError("Label starts with ACE prefix")

    # Step 6: Encode with PUNYCODE
    label = label.encode("punycode")

    # Step 7: Prepend ACE prefix
    label = ace_prefix + label

    # Step 8: Check size
    if 0 < len(label) < 64:
        return label
    raise UnicodeError("label empty or too long")

def ToUnicode(label):
    if len(label) > 1024:
        # Protection from https://github.com/python/cpython/issues/98433.
        # https://datatracker.ietf.org/doc/html/rfc5894#section-6
        # doesn't specify a label size limit prior to NAMEPREP. But having
        # one makes practical sense.
        # This leaves ample room for nameprep() to remove Nothing characters
        # per https://www.rfc-editor.org/rfc/rfc3454#section-3.1 while still
        # preventing us from wasting time decoding a big thing that'll just
        # hit the actual <= 63 length limit in Step 6.
        raise UnicodeError("label way too long")
    # Step 1: Check for ASCII
    if isinstance(label, bytes):
        pure_ascii = True
    else:
        try:
            label = label.encode("ascii")
            pure_ascii = True
        except UnicodeError:
            pure_ascii = False
    if not pure_ascii:
        # Step 2: Perform nameprep
        label = nameprep(label)
        # It doesn't say this, but apparently, it should be ASCII now
        try:
            label = label.encode("ascii")
        except UnicodeError:
            raise UnicodeError("Invalid character in IDN label")
    # Step 3: Check for ACE prefix
    if not label.startswith(ace_prefix):
        return str(label, "ascii")

    # Step 4: Remove ACE prefix
    label1 = label[len(ace_prefix):]

    # Step 5: Decode using PUNYCODE
    result = label1.decode("punycode")

    # Step 6: Apply ToASCII
    label2 = ToASCII(result)

    # Step 7: Compare the result of Step 6 with the one of Step 3.
    # label2 will already be in lower case.
    if str(label, "ascii").lower() != str(label2, "ascii"):
        raise UnicodeError("IDNA does not round-trip", label, label2)

    # Step 8: return the result of Step 5
    return result

### Codec APIs

class Codec(codecs.Codec):
    def encode(self, input, errors='strict'):
        if errors != 'strict':
            # IDNA is quite clear that implementations must be strict
            raise UnicodeError("unsupported error handling "+errors)

        if not input:
            return b'', 0

        try:
            result = input.encode('ascii')
        except UnicodeEncodeError:
            pass
        else:
            # ASCII name: fast path
            labels = result.split(b'.')
            for label in labels[:-1]:
                if not (0 < len(label) < 64):
                    raise UnicodeError("label empty or too long")
            if len(labels[-1]) >= 64:
                raise UnicodeError("label too long")
            return result, len(input)

        result = bytearray()
        labels = dots.split(input)
        if labels and not labels[-1]:
            trailing_dot = b'.'
            del labels[-1]
        else:
            trailing_dot = b''
        for label in labels:
            if result:
                # Join with U+002E
                result.extend(b'.')
            result.extend(ToASCII(label))
        return bytes(result+trailing_dot), len(input)

    def decode(self, input, errors='strict'):
        if errors != 'strict':
            raise UnicodeError("Unsupported error handling "+errors)

        if not input:
            return "", 0

        # IDNA allows decoding to operate on Unicode strings, too.
        if not isinstance(input, bytes):
            # XXX obviously wrong, see #3232
            input = bytes(input)

        if ace_prefix not in input:
            # Fast path
            try:
                return input.decode('ascii'), len(input)
            except UnicodeDecodeError:
                pass

        labels = input.split(b".")

        if labels and len(labels[-1]) == 0:
            trailing_dot = '.'
            del labels[-1]
        else:
            trailing_dot = ''

        result = []
        for label in labels:
            result.append(ToUnicode(label))

        return ".".join(result)+trailing_dot, len(input)

class IncrementalEncoder(codecs.BufferedIncrementalEncoder):
    def _buffer_encode(self, input, errors, final):
        if errors != 'strict':
            # IDNA is quite clear that implementations must be strict
            raise UnicodeError("unsupported error handling "+errors)

        if not input:
            return (b'', 0)

        labels = dots.split(input)
        trailing_dot = b''
        if labels:
            if not labels[-1]:
                trailing_dot = b'.'
                del labels[-1]
            elif not final:
                # Keep potentially unfinished label until the next call
                del labels[-1]
                if labels:
                    trailing_dot = b'.'

        result = bytearray()
        size = 0
        for label in labels:
            if size:
                # Join with U+002E
                result.extend(b'.')
                size += 1
            result.extend(ToASCII(label))
            size += len(label)

        result += trailing_dot
        size += len(trailing_dot)
        return (bytes(result), size)

class IncrementalDecoder(codecs.BufferedIncrementalDecoder):
    def _buffer_decode(self, input, errors, final):
        if errors != 'strict':
            raise UnicodeError("Unsupported error handling "+errors)

        if not input:
            return ("", 0)

        # IDNA allows decoding to operate on Unicode strings, too.
        if isinstance(input, str):
            labels = dots.split(input)
        else:
            # Must be ASCII string
            input = str(input, "ascii")
            labels = input.split(".")

        trailing_dot = ''
        if labels:
            if not labels[-1]:
                trailing_dot = '.'
                del labels[-1]
            elif not final:
                # Keep potentially unfinished label until the next call
                del labels[-1]
                if labels:
                    trailing_dot = '.'

        result = []
        size = 0
        for label in labels:
            result.append(ToUnicode(label))
            if size:
                size += 1
            size += len(label)

        result = ".".join(result) + trailing_dot
        size += len(trailing_dot)
        return (result, size)

class StreamWriter(Codec, codecs.StreamWriter):
    pass

class StreamReader(Codec, codecs.StreamReader):
    pass

### encodings module API

def getregentry():
    return codecs.CodecInfo(
        name='idna',
        encode=Codec().encode,
        decode=Codec().decode,
        incrementalencoder=IncrementalEncoder,
        incrementaldecoder=IncrementalDecoder,
        streamwriter=StreamWriter,
        streamreader=StreamReader,
    )
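The encode path above splits the domain into labels, passes each non-ASCII label through nameprep and punycode, and prepends the `xn--` ACE prefix; `ToUnicode` verifies the round-trip on decode. Observable through the registered codec:

```python
# ASCII names take the fast path and pass through unchanged.
assert "python.org".encode("idna") == b"python.org"

# Non-ASCII labels are nameprepped, punycoded, and ACE-prefixed.
encoded = "b\u00fccher.example".encode("idna")   # "bücher.example"
assert encoded == b"xn--bcher-kva.example"

# ToUnicode reverses the transformation on decode.
assert encoded.decode("idna") == "b\u00fccher.example"
```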
# iso2022_jp.py: Python Unicode Codec for ISO2022_JP
#
# Written by Hye-Shik Chang <perky@FreeBSD.org>

import _codecs_iso2022, codecs
import _multibytecodec as mbc

codec = _codecs_iso2022.getcodec('iso2022_jp')

class Codec(codecs.Codec):
    encode = codec.encode
    decode = codec.decode

class IncrementalEncoder(mbc.MultibyteIncrementalEncoder,
                         codecs.IncrementalEncoder):
    codec = codec

class IncrementalDecoder(mbc.MultibyteIncrementalDecoder,
                         codecs.IncrementalDecoder):
    codec = codec

class StreamReader(Codec, mbc.MultibyteStreamReader, codecs.StreamReader):
    codec = codec

class StreamWriter(Codec, mbc.MultibyteStreamWriter, codecs.StreamWriter):
    codec = codec

def getregentry():
    return codecs.CodecInfo(
        name='iso2022_jp',
        encode=Codec().encode,
        decode=Codec().decode,
        incrementalencoder=IncrementalEncoder,
        incrementaldecoder=IncrementalDecoder,
        streamreader=StreamReader,
        streamwriter=StreamWriter,
    )
# iso2022_jp_1.py: Python Unicode Codec for ISO2022_JP_1
#
# Written by Hye-Shik Chang <perky@FreeBSD.org>

import _codecs_iso2022, codecs
import _multibytecodec as mbc

codec = _codecs_iso2022.getcodec('iso2022_jp_1')

class Codec(codecs.Codec):
    encode = codec.encode
    decode = codec.decode

class IncrementalEncoder(mbc.MultibyteIncrementalEncoder,
                         codecs.IncrementalEncoder):
    codec = codec

class IncrementalDecoder(mbc.MultibyteIncrementalDecoder,
                         codecs.IncrementalDecoder):
    codec = codec

class StreamReader(Codec, mbc.MultibyteStreamReader, codecs.StreamReader):
    codec = codec

class StreamWriter(Codec, mbc.MultibyteStreamWriter, codecs.StreamWriter):
    codec = codec

def getregentry():
    return codecs.CodecInfo(
        name='iso2022_jp_1',
        encode=Codec().encode,
        decode=Codec().decode,
        incrementalencoder=IncrementalEncoder,
        incrementaldecoder=IncrementalDecoder,
        streamreader=StreamReader,
        streamwriter=StreamWriter,
    )
# iso2022_jp_2.py: Python Unicode Codec for ISO2022_JP_2
#
# Written by Hye-Shik Chang <perky@FreeBSD.org>

import _codecs_iso2022, codecs
import _multibytecodec as mbc

codec = _codecs_iso2022.getcodec('iso2022_jp_2')

class Codec(codecs.Codec):
    encode = codec.encode
    decode = codec.decode

class IncrementalEncoder(mbc.MultibyteIncrementalEncoder,
                         codecs.IncrementalEncoder):
    codec = codec

class IncrementalDecoder(mbc.MultibyteIncrementalDecoder,
                         codecs.IncrementalDecoder):
    codec = codec

class StreamReader(Codec, mbc.MultibyteStreamReader, codecs.StreamReader):
    codec = codec

class StreamWriter(Codec, mbc.MultibyteStreamWriter, codecs.StreamWriter):
    codec = codec

def getregentry():
    return codecs.CodecInfo(
        name='iso2022_jp_2',
        encode=Codec().encode,
        decode=Codec().decode,
        incrementalencoder=IncrementalEncoder,
        incrementaldecoder=IncrementalDecoder,
        streamreader=StreamReader,
        streamwriter=StreamWriter,
    )
# iso2022_jp_2004.py: Python Unicode Codec for ISO2022_JP_2004
#
# Written by Hye-Shik Chang <perky@FreeBSD.org>

import _codecs_iso2022, codecs
import _multibytecodec as mbc

codec = _codecs_iso2022.getcodec('iso2022_jp_2004')

class Codec(codecs.Codec):
    encode = codec.encode
    decode = codec.decode

class IncrementalEncoder(mbc.MultibyteIncrementalEncoder,
                         codecs.IncrementalEncoder):
    codec = codec

class IncrementalDecoder(mbc.MultibyteIncrementalDecoder,
                         codecs.IncrementalDecoder):
    codec = codec

class StreamReader(Codec, mbc.MultibyteStreamReader, codecs.StreamReader):
    codec = codec

class StreamWriter(Codec, mbc.MultibyteStreamWriter, codecs.StreamWriter):
    codec = codec

def getregentry():
    return codecs.CodecInfo(
        name='iso2022_jp_2004',
        encode=Codec().encode,
        decode=Codec().decode,
        incrementalencoder=IncrementalEncoder,
        incrementaldecoder=IncrementalDecoder,
        streamreader=StreamReader,
        streamwriter=StreamWriter,
    )
# iso2022_jp_3.py: Python Unicode Codec for ISO2022_JP_3
#
# Written by Hye-Shik Chang <perky@FreeBSD.org>

import _codecs_iso2022, codecs
import _multibytecodec as mbc

codec = _codecs_iso2022.getcodec('iso2022_jp_3')

class Codec(codecs.Codec):
    encode = codec.encode
    decode = codec.decode

class IncrementalEncoder(mbc.MultibyteIncrementalEncoder,
                         codecs.IncrementalEncoder):
    codec = codec

class IncrementalDecoder(mbc.MultibyteIncrementalDecoder,
                         codecs.IncrementalDecoder):
    codec = codec

class StreamReader(Codec, mbc.MultibyteStreamReader, codecs.StreamReader):
    codec = codec

class StreamWriter(Codec, mbc.MultibyteStreamWriter, codecs.StreamWriter):
    codec = codec

def getregentry():
    return codecs.CodecInfo(
        name='iso2022_jp_3',
        encode=Codec().encode,
        decode=Codec().decode,
        incrementalencoder=IncrementalEncoder,
        incrementaldecoder=IncrementalDecoder,
        streamreader=StreamReader,
        streamwriter=StreamWriter,
    )
# iso2022_jp_ext.py: Python Unicode Codec for ISO2022_JP_EXT
#
# Written by Hye-Shik Chang <perky@FreeBSD.org>

import _codecs_iso2022, codecs
import _multibytecodec as mbc

codec = _codecs_iso2022.getcodec('iso2022_jp_ext')

class Codec(codecs.Codec):
    encode = codec.encode
    decode = codec.decode

class IncrementalEncoder(mbc.MultibyteIncrementalEncoder,
                         codecs.IncrementalEncoder):
    codec = codec

class IncrementalDecoder(mbc.MultibyteIncrementalDecoder,
                         codecs.IncrementalDecoder):
    codec = codec

class StreamReader(Codec, mbc.MultibyteStreamReader, codecs.StreamReader):
    codec = codec

class StreamWriter(Codec, mbc.MultibyteStreamWriter, codecs.StreamWriter):
    codec = codec

def getregentry():
    return codecs.CodecInfo(
        name='iso2022_jp_ext',
        encode=Codec().encode,
        decode=Codec().decode,
        incrementalencoder=IncrementalEncoder,
        incrementaldecoder=IncrementalDecoder,
        streamreader=StreamReader,
        streamwriter=StreamWriter,
    )
# iso2022_kr.py: Python Unicode Codec for ISO2022_KR
#
# Written by Hye-Shik Chang <perky@FreeBSD.org>
import _codecs_iso2022, codecs import _multibytecodec as mbc codec = _codecs_iso2022.getcodec('iso2022_kr') class Codec(codecs.Codec): encode = codec.encode decode = codec.decode class IncrementalEncoder(mbc.MultibyteIncrementalEncoder, codecs.IncrementalEncoder): codec = codec class IncrementalDecoder(mbc.MultibyteIncrementalDecoder, codecs.IncrementalDecoder): codec = codec class StreamReader(Codec, mbc.MultibyteStreamReader, codecs.StreamReader): codec = codec class StreamWriter(Codec, mbc.MultibyteStreamWriter, codecs.StreamWriter): codec = codec def getregentry(): return codecs.CodecInfo( name='iso2022_kr', encode=Codec().encode, decode=Codec().decode, incrementalencoder=IncrementalEncoder, incrementaldecoder=IncrementalDecoder, streamreader=StreamReader, streamwriter=StreamWriter, )
# johab.py: Python Unicode Codec for JOHAB
#
# Written by Hye-Shik Chang <perky@FreeBSD.org>
import _codecs_kr, codecs import _multibytecodec as mbc codec = _codecs_kr.getcodec('johab') class Codec(codecs.Codec): encode = codec.encode decode = codec.decode class IncrementalEncoder(mbc.MultibyteIncrementalEncoder, codecs.IncrementalEncoder): codec = codec class IncrementalDecoder(mbc.MultibyteIncrementalDecoder, codecs.IncrementalDecoder): codec = codec class StreamReader(Codec, mbc.MultibyteStreamReader, codecs.StreamReader): codec = codec class StreamWriter(Codec, mbc.MultibyteStreamWriter, codecs.StreamWriter): codec = codec def getregentry(): return codecs.CodecInfo( name='johab', encode=Codec().encode, decode=Codec().decode, incrementalencoder=IncrementalEncoder, incrementaldecoder=IncrementalDecoder, streamreader=StreamReader, streamwriter=StreamWriter, )
""" Python 'latin-1' Codec

    Written by Marc-Andre Lemburg (mal@lemburg.com).

    (c) Copyright CNRI, All Rights Reserved. NO WARRANTY.

"""
# Codec APIs
# Note: Binding these as C functions will result in the class not
# converting them to methods. This is intended.
import codecs class Codec(codecs.Codec): encode = codecs.latin_1_encode decode = codecs.latin_1_decode class IncrementalEncoder(codecs.IncrementalEncoder): def encode(self, input, final=False): return codecs.latin_1_encode(input,self.errors)[0] class IncrementalDecoder(codecs.IncrementalDecoder): def decode(self, input, final=False): return codecs.latin_1_decode(input,self.errors)[0] class StreamWriter(Codec,codecs.StreamWriter): pass class StreamReader(Codec,codecs.StreamReader): pass class StreamConverter(StreamWriter,StreamReader): encode = codecs.latin_1_decode decode = codecs.latin_1_encode def getregentry(): return codecs.CodecInfo( name='iso8859-1', encode=Codec.encode, decode=Codec.decode, incrementalencoder=IncrementalEncoder, incrementaldecoder=IncrementalDecoder, streamreader=StreamReader, streamwriter=StreamWriter, )
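Since latin-1 maps bytes 0–255 one-to-one onto the first 256 Unicode code points, every byte sequence decodes successfully. A minimal round-trip sketch using only the standard library:

```python
# Latin-1 maps each byte 0-255 straight to the code point with the same
# ordinal, so encoding and decoding are simple, table-free conversions.
text = "café"
raw = text.encode("latin-1")
assert raw == b"caf\xe9"              # U+00E9 becomes the single byte 0xE9
assert raw.decode("latin-1") == text

# Every byte value decodes under latin-1 (no invalid sequences exist):
assert bytes(range(256)).decode("latin-1") == "".join(map(chr, range(256)))
```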
""" Python 'mbcs' Codec for Windows

    Cloned by Mark Hammond (mhammond@skippinet.com.au) from ascii.py,
    which was written by Marc-Andre Lemburg (mal@lemburg.com).

    (c) Copyright CNRI, All Rights Reserved. NO WARRANTY.

"""
# Import them explicitly to cause an ImportError on non-Windows systems.
from codecs import mbcs_encode, mbcs_decode import codecs encode = mbcs_encode def decode(input, errors='strict'): return mbcs_decode(input, errors, True) class IncrementalEncoder(codecs.IncrementalEncoder): def encode(self, input, final=False): return mbcs_encode(input, self.errors)[0] class IncrementalDecoder(codecs.BufferedIncrementalDecoder): _buffer_decode = mbcs_decode class StreamWriter(codecs.StreamWriter): encode = mbcs_encode class StreamReader(codecs.StreamReader): decode = mbcs_decode def getregentry(): return codecs.CodecInfo( name='mbcs', encode=encode, decode=decode, incrementalencoder=IncrementalEncoder, incrementaldecoder=IncrementalDecoder, streamreader=StreamReader, streamwriter=StreamWriter, )
""" Python 'oem' Codec for Windows

"""
# Import them explicitly to cause an ImportError on non-Windows systems.
from codecs import oem_encode, oem_decode import codecs encode = oem_encode def decode(input, errors='strict'): return oem_decode(input, errors, True) class IncrementalEncoder(codecs.IncrementalEncoder): def encode(self, input, final=False): return oem_encode(input, self.errors)[0] class IncrementalDecoder(codecs.BufferedIncrementalDecoder): _buffer_decode = oem_decode class StreamWriter(codecs.StreamWriter): encode = oem_encode class StreamReader(codecs.StreamReader): decode = oem_decode def getregentry(): return codecs.CodecInfo( name='oem', encode=encode, decode=decode, incrementalencoder=IncrementalEncoder, incrementaldecoder=IncrementalDecoder, streamreader=StreamReader, streamwriter=StreamWriter, )
""" Codec for the Punycode encoding, as specified in RFC 3492

    Written by Martin v. Loewis.
"""
# Implementation sections follow the RFC 3492 numbering:
#   3.1 Basic code point segregation
#   3.2 Insertion unsort coding
#   3.3 Generalized variable-length integers
#   3.4 Bias adaptation
import codecs def segregate(str): base = bytearray() extended = set() for c in str: if ord(c) < 128: base.append(ord(c)) else: extended.add(c) extended = sorted(extended) return bytes(base), extended def selective_len(str, max): res = 0 for c in str: if ord(c) < max: res += 1 return res def selective_find(str, char, index, pos): l = len(str) while 1: pos += 1 if pos == l: return (-1, -1) c = str[pos] if c == char: return index+1, pos elif c < char: index += 1 def insertion_unsort(str, extended): oldchar = 0x80 result = [] oldindex = -1 for c in extended: index = pos = -1 char = ord(c) curlen = selective_len(str, char) delta = (curlen+1) * (char - oldchar) while 1: index,pos = selective_find(str,c,index,pos) if index == -1: break delta += index - oldindex result.append(delta-1) oldindex = index delta = 0 oldchar = char return result def T(j, bias): res = 36 * (j + 1) - bias if res < 1: return 1 if res > 26: return 26 return res digits = b"abcdefghijklmnopqrstuvwxyz0123456789" def generate_generalized_integer(N, bias): result = bytearray() j = 0 while 1: t = T(j, bias) if N < t: result.append(digits[N]) return bytes(result) result.append(digits[t + ((N - t) % (36 - t))]) N = (N - t) // (36 - t) j += 1 def adapt(delta, first, numchars): if first: delta //= 700 else: delta //= 2 delta += delta // numchars divisions = 0 while delta > 455: delta = delta // 35 divisions += 36 bias = divisions + (36 * delta // (delta + 38)) return bias def generate_integers(baselen, deltas): result = bytearray() bias = 72 for points, delta in enumerate(deltas): s = generate_generalized_integer(delta, bias) result.extend(s) bias = adapt(delta, points==0, baselen+points+1) return bytes(result) def punycode_encode(text): base, extended = segregate(text) deltas = insertion_unsort(text, extended) extended = generate_integers(len(base), deltas) if base: return base + b"-" + extended return extended def decode_generalized_number(extended, extpos, bias, errors): result = 0 w = 1 j = 0 while 1: 
try: char = ord(extended[extpos]) except IndexError: if errors == "strict": raise UnicodeError("incomplete punicode string") return extpos + 1, None extpos += 1 if 0x41 <= char <= 0x5A: digit = char - 0x41 elif 0x30 <= char <= 0x39: digit = char - 22 elif errors == "strict": raise UnicodeError("Invalid extended code point '%s'" % extended[extpos-1]) else: return extpos, None t = T(j, bias) result += digit * w if digit < t: return extpos, result w = w * (36 - t) j += 1 def insertion_sort(base, extended, errors): char = 0x80 pos = -1 bias = 72 extpos = 0 while extpos < len(extended): newpos, delta = decode_generalized_number(extended, extpos, bias, errors) if delta is None: return base pos += delta+1 char += pos // (len(base) + 1) if char > 0x10FFFF: if errors == "strict": raise UnicodeError("Invalid character U+%x" % char) char = ord('?') pos = pos % (len(base) + 1) base = base[:pos] + chr(char) + base[pos:] bias = adapt(delta, (extpos == 0), len(base)) extpos = newpos return base def punycode_decode(text, errors): if isinstance(text, str): text = text.encode("ascii") if isinstance(text, memoryview): text = bytes(text) pos = text.rfind(b"-") if pos == -1: base = "" extended = str(text, "ascii").upper() else: base = str(text[:pos], "ascii", errors) extended = str(text[pos+1:], "ascii").upper() return insertion_sort(base, extended, errors) class Codec(codecs.Codec): def encode(self, input, errors='strict'): res = punycode_encode(input) return res, len(input) def decode(self, input, errors='strict'): if errors not in ('strict', 'replace', 'ignore'): raise UnicodeError("Unsupported error handling "+errors) res = punycode_decode(input, errors) return res, len(input) class IncrementalEncoder(codecs.IncrementalEncoder): def encode(self, input, final=False): return punycode_encode(input) class IncrementalDecoder(codecs.IncrementalDecoder): def decode(self, input, final=False): if self.errors not in ('strict', 'replace', 'ignore'): raise UnicodeError("Unsupported error 
handling "+self.errors) return punycode_decode(input, self.errors) class StreamWriter(Codec,codecs.StreamWriter): pass class StreamReader(Codec,codecs.StreamReader): pass def getregentry(): return codecs.CodecInfo( name='punycode', encode=Codec().encode, decode=Codec().decode, incrementalencoder=IncrementalEncoder, incrementaldecoder=IncrementalDecoder, streamwriter=StreamWriter, streamreader=StreamReader, )
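The pipeline above (segregate, insertion_unsort, generate_integers, and the inverse insertion_sort) is what backs the registered `punycode` codec. A round-trip sketch:

```python
import codecs

# Non-ASCII code points are segregated out of the string, and their
# insertion positions are encoded as generalized variable-length
# integers after the final hyphen-minus delimiter.
encoded = "bücher".encode("punycode")
assert encoded == b"bcher-kva"          # ASCII base + "-" + encoded deltas
assert codecs.decode(encoded, "punycode") == "bücher"

# A pure-ASCII string still round-trips (the delimiter is just trailing):
assert codecs.decode("abc".encode("punycode"), "punycode") == "abc"
```

This is the same transformation IDNA applies to internationalized domain labels (`bücher.de` becomes `xn--bcher-kva.de`, with the `xn--` prefix added by the IDNA layer, not by this codec).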
""" Codec for quoted-printable encoding.

    This codec de/encodes from bytes to bytes.
"""
import codecs import quopri from io import BytesIO def quopri_encode(input, errors='strict'): assert errors == 'strict' f = BytesIO(input) g = BytesIO() quopri.encode(f, g, quotetabs=True) return (g.getvalue(), len(input)) def quopri_decode(input, errors='strict'): assert errors == 'strict' f = BytesIO(input) g = BytesIO() quopri.decode(f, g) return (g.getvalue(), len(input)) class Codec(codecs.Codec): def encode(self, input, errors='strict'): return quopri_encode(input, errors) def decode(self, input, errors='strict'): return quopri_decode(input, errors) class IncrementalEncoder(codecs.IncrementalEncoder): def encode(self, input, final=False): return quopri_encode(input, self.errors)[0] class IncrementalDecoder(codecs.IncrementalDecoder): def decode(self, input, final=False): return quopri_decode(input, self.errors)[0] class StreamWriter(Codec, codecs.StreamWriter): charbuffertype = bytes class StreamReader(Codec, codecs.StreamReader): charbuffertype = bytes def getregentry(): return codecs.CodecInfo( name='quopri', encode=quopri_encode, decode=quopri_decode, incrementalencoder=IncrementalEncoder, incrementaldecoder=IncrementalDecoder, streamwriter=StreamWriter, streamreader=StreamReader, _is_text_encoding=False, )
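Because the codec is registered with `_is_text_encoding=False`, it is driven through `codecs.encode`/`codecs.decode` on bytes rather than `str.encode`. A short sketch:

```python
import codecs

# '=' must itself be escaped, as '=3D', in quoted-printable output.
assert codecs.encode(b"a=b", "quopri_codec").startswith(b"a=3Db")

# Bytes-to-bytes round trip; with quotetabs=True (as the encoder above
# passes), embedded tabs are escaped as =09 on the way out.
blob = b"hello=world\tdone"
assert codecs.decode(codecs.encode(blob, "quopri_codec"), "quopri_codec") == blob
```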
""" Python 'raw-unicode-escape' Codec

    Written by Marc-Andre Lemburg (mal@lemburg.com).

    (c) Copyright CNRI, All Rights Reserved. NO WARRANTY.

"""
# Codec APIs
# Note: Binding these as C functions will result in the class not
# converting them to methods. This is intended.
import codecs class Codec(codecs.Codec): encode = codecs.raw_unicode_escape_encode decode = codecs.raw_unicode_escape_decode class IncrementalEncoder(codecs.IncrementalEncoder): def encode(self, input, final=False): return codecs.raw_unicode_escape_encode(input, self.errors)[0] class IncrementalDecoder(codecs.BufferedIncrementalDecoder): def _buffer_decode(self, input, errors, final): return codecs.raw_unicode_escape_decode(input, errors, final) class StreamWriter(Codec,codecs.StreamWriter): pass class StreamReader(Codec,codecs.StreamReader): def decode(self, input, errors='strict'): return codecs.raw_unicode_escape_decode(input, errors, False) def getregentry(): return codecs.CodecInfo( name='raw-unicode-escape', encode=Codec.encode, decode=Codec.decode, incrementalencoder=IncrementalEncoder, incrementaldecoder=IncrementalDecoder, streamwriter=StreamWriter, streamreader=StreamReader, )
#!/usr/bin/env python
""" Python Character Mapping Codec for ROT13.

    This codec de/encodes from str to str.

    Written by Marc-Andre Lemburg (mal@lemburg.com).
"""
import codecs class Codec(codecs.Codec): def encode(self, input, errors='strict'): return (str.translate(input, rot13_map), len(input)) def decode(self, input, errors='strict'): return (str.translate(input, rot13_map), len(input)) class IncrementalEncoder(codecs.IncrementalEncoder): def encode(self, input, final=False): return str.translate(input, rot13_map) class IncrementalDecoder(codecs.IncrementalDecoder): def decode(self, input, final=False): return str.translate(input, rot13_map) class StreamWriter(Codec,codecs.StreamWriter): pass class StreamReader(Codec,codecs.StreamReader): pass def getregentry(): return codecs.CodecInfo( name='rot-13', encode=Codec().encode, decode=Codec().decode, incrementalencoder=IncrementalEncoder, incrementaldecoder=IncrementalDecoder, streamwriter=StreamWriter, streamreader=StreamReader, _is_text_encoding=False, ) rot13_map = codecs.make_identity_dict(range(256)) rot13_map.update({ 0x0041: 0x004e, 0x0042: 0x004f, 0x0043: 0x0050, 0x0044: 0x0051, 0x0045: 0x0052, 0x0046: 0x0053, 0x0047: 0x0054, 0x0048: 0x0055, 0x0049: 0x0056, 0x004a: 0x0057, 0x004b: 0x0058, 0x004c: 0x0059, 0x004d: 0x005a, 0x004e: 0x0041, 0x004f: 0x0042, 0x0050: 0x0043, 0x0051: 0x0044, 0x0052: 0x0045, 0x0053: 0x0046, 0x0054: 0x0047, 0x0055: 0x0048, 0x0056: 0x0049, 0x0057: 0x004a, 0x0058: 0x004b, 0x0059: 0x004c, 0x005a: 0x004d, 0x0061: 0x006e, 0x0062: 0x006f, 0x0063: 0x0070, 0x0064: 0x0071, 0x0065: 0x0072, 0x0066: 0x0073, 0x0067: 0x0074, 0x0068: 0x0075, 0x0069: 0x0076, 0x006a: 0x0077, 0x006b: 0x0078, 0x006c: 0x0079, 0x006d: 0x007a, 0x006e: 0x0061, 0x006f: 0x0062, 0x0070: 0x0063, 0x0071: 0x0064, 0x0072: 0x0065, 0x0073: 0x0066, 0x0074: 0x0067, 0x0075: 0x0068, 0x0076: 0x0069, 0x0077: 0x006a, 0x0078: 0x006b, 0x0079: 0x006c, 0x007a: 0x006d, }) def rot13(infile, outfile): outfile.write(codecs.encode(infile.read(), 'rot-13')) if __name__ == '__main__': import sys rot13(sys.stdin, sys.stdout)
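Like quopri, rot-13 sets `_is_text_encoding=False`, so it is used via `codecs.encode`/`codecs.decode` on strings rather than `str.encode`. Because the map rotates letters by exactly 13 places out of 26, the transform is its own inverse:

```python
import codecs

# Letters rotate 13 places; digits, punctuation, and whitespace pass
# through untouched via the identity part of rot13_map.
assert codecs.encode("Hello, World!", "rot-13") == "Uryyb, Jbeyq!"
assert codecs.decode("Uryyb, Jbeyq!", "rot-13") == "Hello, World!"

# Applying the transform twice restores the original text:
s = "The quick brown fox"
assert codecs.encode(codecs.encode(s, "rot-13"), "rot-13") == s
```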
# shift_jis.py: Python Unicode Codec for SHIFT_JIS
#
# Written by Hye-Shik Chang <perky@FreeBSD.org>
import _codecs_jp, codecs import _multibytecodec as mbc codec = _codecs_jp.getcodec('shift_jis') class Codec(codecs.Codec): encode = codec.encode decode = codec.decode class IncrementalEncoder(mbc.MultibyteIncrementalEncoder, codecs.IncrementalEncoder): codec = codec class IncrementalDecoder(mbc.MultibyteIncrementalDecoder, codecs.IncrementalDecoder): codec = codec class StreamReader(Codec, mbc.MultibyteStreamReader, codecs.StreamReader): codec = codec class StreamWriter(Codec, mbc.MultibyteStreamWriter, codecs.StreamWriter): codec = codec def getregentry(): return codecs.CodecInfo( name='shift_jis', encode=Codec().encode, decode=Codec().decode, incrementalencoder=IncrementalEncoder, incrementaldecoder=IncrementalDecoder, streamreader=StreamReader, streamwriter=StreamWriter, )
# shift_jis_2004.py: Python Unicode Codec for SHIFT_JIS_2004
#
# Written by Hye-Shik Chang <perky@FreeBSD.org>
import _codecs_jp, codecs import _multibytecodec as mbc codec = _codecs_jp.getcodec('shift_jis_2004') class Codec(codecs.Codec): encode = codec.encode decode = codec.decode class IncrementalEncoder(mbc.MultibyteIncrementalEncoder, codecs.IncrementalEncoder): codec = codec class IncrementalDecoder(mbc.MultibyteIncrementalDecoder, codecs.IncrementalDecoder): codec = codec class StreamReader(Codec, mbc.MultibyteStreamReader, codecs.StreamReader): codec = codec class StreamWriter(Codec, mbc.MultibyteStreamWriter, codecs.StreamWriter): codec = codec def getregentry(): return codecs.CodecInfo( name='shift_jis_2004', encode=Codec().encode, decode=Codec().decode, incrementalencoder=IncrementalEncoder, incrementaldecoder=IncrementalDecoder, streamreader=StreamReader, streamwriter=StreamWriter, )
# shift_jisx0213.py: Python Unicode Codec for SHIFT_JISX0213
#
# Written by Hye-Shik Chang <perky@FreeBSD.org>
import _codecs_jp, codecs import _multibytecodec as mbc codec = _codecs_jp.getcodec('shift_jisx0213') class Codec(codecs.Codec): encode = codec.encode decode = codec.decode class IncrementalEncoder(mbc.MultibyteIncrementalEncoder, codecs.IncrementalEncoder): codec = codec class IncrementalDecoder(mbc.MultibyteIncrementalDecoder, codecs.IncrementalDecoder): codec = codec class StreamReader(Codec, mbc.MultibyteStreamReader, codecs.StreamReader): codec = codec class StreamWriter(Codec, mbc.MultibyteStreamWriter, codecs.StreamWriter): codec = codec def getregentry(): return codecs.CodecInfo( name='shift_jisx0213', encode=Codec().encode, decode=Codec().decode, incrementalencoder=IncrementalEncoder, incrementaldecoder=IncrementalDecoder, streamreader=StreamReader, streamwriter=StreamWriter, )
""" Python 'undefined' Codec

    This codec will always raise a ValueError exception when being
    used. It is intended for use by the site.py file to switch off
    automatic string to Unicode coercion.

    Written by Marc-Andre Lemburg (mal@lemburg.com).

    (c) Copyright CNRI, All Rights Reserved. NO WARRANTY.

"""
import codecs class Codec(codecs.Codec): def encode(self,input,errors='strict'): raise UnicodeError("undefined encoding") def decode(self,input,errors='strict'): raise UnicodeError("undefined encoding") class IncrementalEncoder(codecs.IncrementalEncoder): def encode(self, input, final=False): raise UnicodeError("undefined encoding") class IncrementalDecoder(codecs.IncrementalDecoder): def decode(self, input, final=False): raise UnicodeError("undefined encoding") class StreamWriter(Codec,codecs.StreamWriter): pass class StreamReader(Codec,codecs.StreamReader): pass def getregentry(): return codecs.CodecInfo( name='undefined', encode=Codec().encode, decode=Codec().decode, incrementalencoder=IncrementalEncoder, incrementaldecoder=IncrementalDecoder, streamwriter=StreamWriter, streamreader=StreamReader, )
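Every entry point raises `UnicodeError` (a subclass of `ValueError`), in both directions and regardless of input. A sketch of the observable behavior:

```python
import codecs

# The 'undefined' codec refuses every operation, encode and decode alike.
for op, arg in ((codecs.encode, "text"), (codecs.decode, b"bytes")):
    try:
        op(arg, "undefined")
    except UnicodeError:
        pass                       # expected: "undefined encoding"
    else:
        raise AssertionError("undefined codec should have raised")
```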
""" Python 'unicode-escape' Codec

    Written by Marc-Andre Lemburg (mal@lemburg.com).

    (c) Copyright CNRI, All Rights Reserved. NO WARRANTY.

"""
# Codec APIs
# Note: Binding these as C functions will result in the class not
# converting them to methods. This is intended.
import codecs class Codec(codecs.Codec): encode = codecs.unicode_escape_encode decode = codecs.unicode_escape_decode class IncrementalEncoder(codecs.IncrementalEncoder): def encode(self, input, final=False): return codecs.unicode_escape_encode(input, self.errors)[0] class IncrementalDecoder(codecs.BufferedIncrementalDecoder): def _buffer_decode(self, input, errors, final): return codecs.unicode_escape_decode(input, errors, final) class StreamWriter(Codec,codecs.StreamWriter): pass class StreamReader(Codec,codecs.StreamReader): def decode(self, input, errors='strict'): return codecs.unicode_escape_decode(input, errors, False) def getregentry(): return codecs.CodecInfo( name='unicode-escape', encode=Codec.encode, decode=Codec.decode, incrementalencoder=IncrementalEncoder, incrementaldecoder=IncrementalDecoder, streamwriter=StreamWriter, streamreader=StreamReader, )
""" Python 'utf-16' Codec

    Written by Marc-Andre Lemburg (mal@lemburg.com).

    (c) Copyright CNRI, All Rights Reserved. NO WARRANTY.

"""
# State info we return to the caller:
#   0: stream is in natural order for this platform
#   2: endianness hasn't been determined yet
# (We're never writing in unnatural order.)
#
# Additional state info from the base class must be None here,
# as it isn't passed along to the caller.
#
# Additional state info we pass to the caller:
#   0: stream is in natural order for this platform
#   1: stream is in unnatural order
#   2: endianness hasn't been determined yet
# (state[1] will be ignored by BufferedIncrementalDecoder.setstate())
import codecs, sys encode = codecs.utf_16_encode def decode(input, errors='strict'): return codecs.utf_16_decode(input, errors, True) class IncrementalEncoder(codecs.IncrementalEncoder): def __init__(self, errors='strict'): codecs.IncrementalEncoder.__init__(self, errors) self.encoder = None def encode(self, input, final=False): if self.encoder is None: result = codecs.utf_16_encode(input, self.errors)[0] if sys.byteorder == 'little': self.encoder = codecs.utf_16_le_encode else: self.encoder = codecs.utf_16_be_encode return result return self.encoder(input, self.errors)[0] def reset(self): codecs.IncrementalEncoder.reset(self) self.encoder = None def getstate(self): return (2 if self.encoder is None else 0) def setstate(self, state): if state: self.encoder = None else: if sys.byteorder == 'little': self.encoder = codecs.utf_16_le_encode else: self.encoder = codecs.utf_16_be_encode class IncrementalDecoder(codecs.BufferedIncrementalDecoder): def __init__(self, errors='strict'): codecs.BufferedIncrementalDecoder.__init__(self, errors) self.decoder = None def _buffer_decode(self, input, errors, final): if self.decoder is None: (output, consumed, byteorder) = \ codecs.utf_16_ex_decode(input, errors, 0, final) if byteorder == -1: self.decoder = codecs.utf_16_le_decode elif byteorder == 1: self.decoder = codecs.utf_16_be_decode elif consumed >= 2: raise UnicodeError("UTF-16 stream does not start with BOM") return (output, consumed) return self.decoder(input, self.errors, final) def reset(self): codecs.BufferedIncrementalDecoder.reset(self) self.decoder = None def getstate(self): state = codecs.BufferedIncrementalDecoder.getstate(self)[0] if self.decoder is None: return (state, 2) addstate = int((sys.byteorder == "big") != (self.decoder is codecs.utf_16_be_decode)) return (state, addstate) def setstate(self, state): codecs.BufferedIncrementalDecoder.setstate(self, state) state = state[1] if state == 0: self.decoder = (codecs.utf_16_be_decode if sys.byteorder == "big" else 
codecs.utf_16_le_decode) elif state == 1: self.decoder = (codecs.utf_16_le_decode if sys.byteorder == "big" else codecs.utf_16_be_decode) else: self.decoder = None class StreamWriter(codecs.StreamWriter): def __init__(self, stream, errors='strict'): codecs.StreamWriter.__init__(self, stream, errors) self.encoder = None def reset(self): codecs.StreamWriter.reset(self) self.encoder = None def encode(self, input, errors='strict'): if self.encoder is None: result = codecs.utf_16_encode(input, errors) if sys.byteorder == 'little': self.encoder = codecs.utf_16_le_encode else: self.encoder = codecs.utf_16_be_encode return result else: return self.encoder(input, errors) class StreamReader(codecs.StreamReader): def reset(self): codecs.StreamReader.reset(self) try: del self.decode except AttributeError: pass def decode(self, input, errors='strict'): (object, consumed, byteorder) = \ codecs.utf_16_ex_decode(input, errors, 0, False) if byteorder == -1: self.decode = codecs.utf_16_le_decode elif byteorder == 1: self.decode = codecs.utf_16_be_decode elif consumed>=2: raise UnicodeError("UTF-16 stream does not start with BOM") return (object, consumed) def getregentry(): return codecs.CodecInfo( name='utf-16', encode=encode, decode=decode, incrementalencoder=IncrementalEncoder, incrementaldecoder=IncrementalDecoder, streamreader=StreamReader, streamwriter=StreamWriter, )
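The deferred-encoder pattern above exists so the BOM is emitted exactly once: the first `encode` call goes through `utf_16_encode` (which writes a BOM in native order), after which the encoder is pinned to the matching explicit-endian variant. Observable from the outside:

```python
import codecs

data = "hi".encode("utf-16")
# The one-shot encoder prepends a byte order mark in native order,
# and the decoder consumes it to choose endianness.
assert data[:2] in (codecs.BOM_UTF16_LE, codecs.BOM_UTF16_BE)
assert data.decode("utf-16") == "hi"

# An incremental encoder emits the BOM only on its first call:
enc = codecs.getincrementalencoder("utf-16")()
first, second = enc.encode("a"), enc.encode("b")
assert len(first) == 4            # 2-byte BOM + 2-byte code unit
assert len(second) == 2           # no BOM on subsequent calls
```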
""" Python 'utf-16-be' Codec

    Written by Marc-Andre Lemburg (mal@lemburg.com).

    (c) Copyright CNRI, All Rights Reserved. NO WARRANTY.

"""
import codecs encode = codecs.utf_16_be_encode def decode(input, errors='strict'): return codecs.utf_16_be_decode(input, errors, True) class IncrementalEncoder(codecs.IncrementalEncoder): def encode(self, input, final=False): return codecs.utf_16_be_encode(input, self.errors)[0] class IncrementalDecoder(codecs.BufferedIncrementalDecoder): _buffer_decode = codecs.utf_16_be_decode class StreamWriter(codecs.StreamWriter): encode = codecs.utf_16_be_encode class StreamReader(codecs.StreamReader): decode = codecs.utf_16_be_decode def getregentry(): return codecs.CodecInfo( name='utf-16-be', encode=encode, decode=decode, incrementalencoder=IncrementalEncoder, incrementaldecoder=IncrementalDecoder, streamreader=StreamReader, streamwriter=StreamWriter, )
""" Python 'utf-16-le' Codec

    Written by Marc-Andre Lemburg (mal@lemburg.com).

    (c) Copyright CNRI, All Rights Reserved. NO WARRANTY.

"""
import codecs encode = codecs.utf_16_le_encode def decode(input, errors='strict'): return codecs.utf_16_le_decode(input, errors, True) class IncrementalEncoder(codecs.IncrementalEncoder): def encode(self, input, final=False): return codecs.utf_16_le_encode(input, self.errors)[0] class IncrementalDecoder(codecs.BufferedIncrementalDecoder): _buffer_decode = codecs.utf_16_le_decode class StreamWriter(codecs.StreamWriter): encode = codecs.utf_16_le_encode class StreamReader(codecs.StreamReader): decode = codecs.utf_16_le_decode def getregentry(): return codecs.CodecInfo( name='utf-16-le', encode=encode, decode=decode, incrementalencoder=IncrementalEncoder, incrementaldecoder=IncrementalDecoder, streamreader=StreamReader, streamwriter=StreamWriter, )
"""
Python 'utf-32' Codec
"""
# State info we return to the caller:
#   0: stream is in natural order for this platform
#   2: endianness hasn't been determined yet
# (We're never writing in unnatural order.)
#
# Additional state info from the base class must be None here,
# as it isn't passed along to the caller.
#
# Additional state info we pass to the caller:
#   0: stream is in natural order for this platform
#   1: stream is in unnatural order
#   2: endianness hasn't been determined yet
# (state[1] will be ignored by BufferedIncrementalDecoder.setstate())
import codecs, sys encode = codecs.utf_32_encode def decode(input, errors='strict'): return codecs.utf_32_decode(input, errors, True) class IncrementalEncoder(codecs.IncrementalEncoder): def __init__(self, errors='strict'): codecs.IncrementalEncoder.__init__(self, errors) self.encoder = None def encode(self, input, final=False): if self.encoder is None: result = codecs.utf_32_encode(input, self.errors)[0] if sys.byteorder == 'little': self.encoder = codecs.utf_32_le_encode else: self.encoder = codecs.utf_32_be_encode return result return self.encoder(input, self.errors)[0] def reset(self): codecs.IncrementalEncoder.reset(self) self.encoder = None def getstate(self): return (2 if self.encoder is None else 0) def setstate(self, state): if state: self.encoder = None else: if sys.byteorder == 'little': self.encoder = codecs.utf_32_le_encode else: self.encoder = codecs.utf_32_be_encode class IncrementalDecoder(codecs.BufferedIncrementalDecoder): def __init__(self, errors='strict'): codecs.BufferedIncrementalDecoder.__init__(self, errors) self.decoder = None def _buffer_decode(self, input, errors, final): if self.decoder is None: (output, consumed, byteorder) = \ codecs.utf_32_ex_decode(input, errors, 0, final) if byteorder == -1: self.decoder = codecs.utf_32_le_decode elif byteorder == 1: self.decoder = codecs.utf_32_be_decode elif consumed >= 4: raise UnicodeError("UTF-32 stream does not start with BOM") return (output, consumed) return self.decoder(input, self.errors, final) def reset(self): codecs.BufferedIncrementalDecoder.reset(self) self.decoder = None def getstate(self): state = codecs.BufferedIncrementalDecoder.getstate(self)[0] if self.decoder is None: return (state, 2) addstate = int((sys.byteorder == "big") != (self.decoder is codecs.utf_32_be_decode)) return (state, addstate) def setstate(self, state): codecs.BufferedIncrementalDecoder.setstate(self, state) state = state[1] if state == 0: self.decoder = (codecs.utf_32_be_decode if sys.byteorder == "big" else 
codecs.utf_32_le_decode) elif state == 1: self.decoder = (codecs.utf_32_le_decode if sys.byteorder == "big" else codecs.utf_32_be_decode) else: self.decoder = None class StreamWriter(codecs.StreamWriter): def __init__(self, stream, errors='strict'): self.encoder = None codecs.StreamWriter.__init__(self, stream, errors) def reset(self): codecs.StreamWriter.reset(self) self.encoder = None def encode(self, input, errors='strict'): if self.encoder is None: result = codecs.utf_32_encode(input, errors) if sys.byteorder == 'little': self.encoder = codecs.utf_32_le_encode else: self.encoder = codecs.utf_32_be_encode return result else: return self.encoder(input, errors) class StreamReader(codecs.StreamReader): def reset(self): codecs.StreamReader.reset(self) try: del self.decode except AttributeError: pass def decode(self, input, errors='strict'): (object, consumed, byteorder) = \ codecs.utf_32_ex_decode(input, errors, 0, False) if byteorder == -1: self.decode = codecs.utf_32_le_decode elif byteorder == 1: self.decode = codecs.utf_32_be_decode elif consumed>=4: raise UnicodeError("UTF-32 stream does not start with BOM") return (object, consumed) def getregentry(): return codecs.CodecInfo( name='utf-32', encode=encode, decode=decode, incrementalencoder=IncrementalEncoder, incrementaldecoder=IncrementalDecoder, streamreader=StreamReader, streamwriter=StreamWriter, )
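The utf-32 codec mirrors the utf-16 machinery with a 4-byte BOM and 4-byte code units; the explicit-endian variants below it write no BOM at all. A quick sketch:

```python
import codecs

data = "hi".encode("utf-32")
# Native-order BOM first (hence consumed >= 4 in the BOM check above),
# then one 4-byte unit per code point.
assert data[:4] in (codecs.BOM_UTF32_LE, codecs.BOM_UTF32_BE)
assert data.decode("utf-32") == "hi"

# Explicit-endian variants emit no BOM:
assert "hi".encode("utf-32-le") == b"h\x00\x00\x00i\x00\x00\x00"
```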
""" Python 'utf-32-be' Codec """
import codecs encode = codecs.utf_32_be_encode def decode(input, errors='strict'): return codecs.utf_32_be_decode(input, errors, True) class IncrementalEncoder(codecs.IncrementalEncoder): def encode(self, input, final=False): return codecs.utf_32_be_encode(input, self.errors)[0] class IncrementalDecoder(codecs.BufferedIncrementalDecoder): _buffer_decode = codecs.utf_32_be_decode class StreamWriter(codecs.StreamWriter): encode = codecs.utf_32_be_encode class StreamReader(codecs.StreamReader): decode = codecs.utf_32_be_decode def getregentry(): return codecs.CodecInfo( name='utf-32-be', encode=encode, decode=decode, incrementalencoder=IncrementalEncoder, incrementaldecoder=IncrementalDecoder, streamreader=StreamReader, streamwriter=StreamWriter, )
""" Python 'utf-32-le' Codec """
import codecs encode = codecs.utf_32_le_encode def decode(input, errors='strict'): return codecs.utf_32_le_decode(input, errors, True) class IncrementalEncoder(codecs.IncrementalEncoder): def encode(self, input, final=False): return codecs.utf_32_le_encode(input, self.errors)[0] class IncrementalDecoder(codecs.BufferedIncrementalDecoder): _buffer_decode = codecs.utf_32_le_decode class StreamWriter(codecs.StreamWriter): encode = codecs.utf_32_le_encode class StreamReader(codecs.StreamReader): decode = codecs.utf_32_le_decode def getregentry(): return codecs.CodecInfo( name='utf-32-le', encode=encode, decode=decode, incrementalencoder=IncrementalEncoder, incrementaldecoder=IncrementalDecoder, streamreader=StreamReader, streamwriter=StreamWriter, )
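In contrast to the generic codec, the explicit-endian variants above neither write nor expect a BOM; for example:

```python
# Each code point becomes exactly four bytes in the stated byte order.
assert "A".encode("utf-32-be") == b"\x00\x00\x00A"
assert "A".encode("utf-32-le") == b"A\x00\x00\x00"
assert b"\x00\x00\x00A".decode("utf-32-be") == "A"
```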
""" Python 'utf-7' Codec

Written by Brian Quinlan (brian@sweetapp.com).
"""
import codecs encode = codecs.utf_7_encode def decode(input, errors='strict'): return codecs.utf_7_decode(input, errors, True) class IncrementalEncoder(codecs.IncrementalEncoder): def encode(self, input, final=False): return codecs.utf_7_encode(input, self.errors)[0] class IncrementalDecoder(codecs.BufferedIncrementalDecoder): _buffer_decode = codecs.utf_7_decode class StreamWriter(codecs.StreamWriter): encode = codecs.utf_7_encode class StreamReader(codecs.StreamReader): decode = codecs.utf_7_decode def getregentry(): return codecs.CodecInfo( name='utf-7', encode=encode, decode=decode, incrementalencoder=IncrementalEncoder, incrementaldecoder=IncrementalDecoder, streamreader=StreamReader, streamwriter=StreamWriter, )
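A short sketch of the UTF-7 escaping behavior this codec provides:

```python
# Printable ASCII passes through unchanged; '+' opens a modified-base64
# run, so a literal '+' is escaped as '+-'.
assert "Hello".encode("utf-7") == b"Hello"
assert "+".encode("utf-7") == b"+-"

# Non-ASCII characters round-trip through a +...- sequence.
s = "caf\u00e9"
assert s.encode("utf-7").decode("utf-7") == s
```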
""" Python 'utf-8' Codec

Written by Marc-Andre Lemburg (mal@lemburg.com).
(c) Copyright CNRI, All Rights Reserved. NO WARRANTY.
"""
import codecs encode = codecs.utf_8_encode def decode(input, errors='strict'): return codecs.utf_8_decode(input, errors, True) class IncrementalEncoder(codecs.IncrementalEncoder): def encode(self, input, final=False): return codecs.utf_8_encode(input, self.errors)[0] class IncrementalDecoder(codecs.BufferedIncrementalDecoder): _buffer_decode = codecs.utf_8_decode class StreamWriter(codecs.StreamWriter): encode = codecs.utf_8_encode class StreamReader(codecs.StreamReader): decode = codecs.utf_8_decode def getregentry(): return codecs.CodecInfo( name='utf-8', encode=encode, decode=decode, incrementalencoder=IncrementalEncoder, incrementaldecoder=IncrementalDecoder, streamreader=StreamReader, streamwriter=StreamWriter, )
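Because IncrementalDecoder derives from BufferedIncrementalDecoder, the bytes of an incomplete sequence are held back between calls; a small sketch:

```python
import codecs

dec = codecs.getincrementaldecoder("utf-8")()
# The lone lead byte of U+00E9 ('é') is buffered, not decoded...
assert dec.decode(b"\xc3") == ""
# ...and the character is emitted once the continuation byte arrives.
assert dec.decode(b"\xa9") == "\u00e9"
```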
""" Python 'utf-8-sig' Codec

This work similar to UTF-8 with the following changes:

* On encoding/writing a UTF-8 encoded BOM will be prepended/written as the
  first three bytes.

* On decoding/reading if the first three bytes are a UTF-8 encoded BOM, these
  bytes will be skipped.
"""
# Decoder note: with fewer than three buffered bytes there is not enough
# data to decide whether the input really starts with a BOM, so decoding
# is retried on the next call. In getstate(), state[1] must be 0 here, as
# it isn't passed along to the caller (state[1] will be ignored by
# BufferedIncrementalDecoder.setstate()).
import codecs def encode(input, errors='strict'): return (codecs.BOM_UTF8 + codecs.utf_8_encode(input, errors)[0], len(input)) def decode(input, errors='strict'): prefix = 0 if input[:3] == codecs.BOM_UTF8: input = input[3:] prefix = 3 (output, consumed) = codecs.utf_8_decode(input, errors, True) return (output, consumed+prefix) class IncrementalEncoder(codecs.IncrementalEncoder): def __init__(self, errors='strict'): codecs.IncrementalEncoder.__init__(self, errors) self.first = 1 def encode(self, input, final=False): if self.first: self.first = 0 return codecs.BOM_UTF8 + \ codecs.utf_8_encode(input, self.errors)[0] else: return codecs.utf_8_encode(input, self.errors)[0] def reset(self): codecs.IncrementalEncoder.reset(self) self.first = 1 def getstate(self): return self.first def setstate(self, state): self.first = state class IncrementalDecoder(codecs.BufferedIncrementalDecoder): def __init__(self, errors='strict'): codecs.BufferedIncrementalDecoder.__init__(self, errors) self.first = 1 def _buffer_decode(self, input, errors, final): if self.first: if len(input) < 3: if codecs.BOM_UTF8.startswith(input): return ("", 0) else: self.first = 0 else: self.first = 0 if input[:3] == codecs.BOM_UTF8: (output, consumed) = \ codecs.utf_8_decode(input[3:], errors, final) return (output, consumed+3) return codecs.utf_8_decode(input, errors, final) def reset(self): codecs.BufferedIncrementalDecoder.reset(self) self.first = 1 def getstate(self): state = codecs.BufferedIncrementalDecoder.getstate(self) return (state[0], self.first) def setstate(self, state): codecs.BufferedIncrementalDecoder.setstate(self, state) self.first = state[1] class StreamWriter(codecs.StreamWriter): def reset(self): codecs.StreamWriter.reset(self) try: del self.encode except AttributeError: pass def encode(self, input, errors='strict'): self.encode = codecs.utf_8_encode return encode(input, errors) class StreamReader(codecs.StreamReader): def reset(self): codecs.StreamReader.reset(self) try: del 
self.decode except AttributeError: pass def decode(self, input, errors='strict'): if len(input) < 3: if codecs.BOM_UTF8.startswith(input): return ("", 0) elif input[:3] == codecs.BOM_UTF8: self.decode = codecs.utf_8_decode (output, consumed) = codecs.utf_8_decode(input[3:],errors) return (output, consumed+3) self.decode = codecs.utf_8_decode return codecs.utf_8_decode(input, errors) def getregentry(): return codecs.CodecInfo( name='utf-8-sig', encode=encode, decode=decode, incrementalencoder=IncrementalEncoder, incrementaldecoder=IncrementalDecoder, streamreader=StreamReader, streamwriter=StreamWriter, )
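The BOM add/strip behavior described above can be checked directly:

```python
import codecs

# Encoding prepends the three-byte BOM exactly once.
assert "hi".encode("utf-8-sig") == codecs.BOM_UTF8 + b"hi"
# Decoding strips a leading BOM if present...
assert (codecs.BOM_UTF8 + b"hi").decode("utf-8-sig") == "hi"
# ...and is a no-op for input that lacks one.
assert b"hi".decode("utf-8-sig") == "hi"
```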
"""Python 'uu_codec' Codec - UU content transfer encoding.

This codec de/encodes from bytes to bytes.

Written by Marc-Andre Lemburg (mal@lemburg.com). Some details were
adapted from uu.py which was written by Lance Ellinghouse and
modified by Jack Jansen and Fredrik Lundh.
"""
# Implementation notes: newline characters are escaped in the filename
# written to the 'begin' line, and uu_decode contains a workaround (by
# Fredrik Lundh) for broken uuencoders that emit short trailing lines,
# optionally warning via sys.stderr.write('Warning: %s\n' % str(v)).
import codecs import binascii from io import BytesIO def uu_encode(input, errors='strict', filename='<data>', mode=0o666): assert errors == 'strict' infile = BytesIO(input) outfile = BytesIO() read = infile.read write = outfile.write filename = filename.replace('\n','\\n') filename = filename.replace('\r','\\r') write(('begin %o %s\n' % (mode & 0o777, filename)).encode('ascii')) chunk = read(45) while chunk: write(binascii.b2a_uu(chunk)) chunk = read(45) write(b' \nend\n') return (outfile.getvalue(), len(input)) def uu_decode(input, errors='strict'): assert errors == 'strict' infile = BytesIO(input) outfile = BytesIO() readline = infile.readline write = outfile.write while 1: s = readline() if not s: raise ValueError('Missing "begin" line in input data') if s[:5] == b'begin': break while True: s = readline() if not s or s == b'end\n': break try: data = binascii.a2b_uu(s) except binascii.Error as v: nbytes = (((s[0]-32) & 63) * 4 + 5) // 3 data = binascii.a2b_uu(s[:nbytes]) write(data) if not s: raise ValueError('Truncated input data') return (outfile.getvalue(), len(input)) class Codec(codecs.Codec): def encode(self, input, errors='strict'): return uu_encode(input, errors) def decode(self, input, errors='strict'): return uu_decode(input, errors) class IncrementalEncoder(codecs.IncrementalEncoder): def encode(self, input, final=False): return uu_encode(input, self.errors)[0] class IncrementalDecoder(codecs.IncrementalDecoder): def decode(self, input, final=False): return uu_decode(input, self.errors)[0] class StreamWriter(Codec, codecs.StreamWriter): charbuffertype = bytes class StreamReader(Codec, codecs.StreamReader): charbuffertype = bytes def getregentry(): return codecs.CodecInfo( name='uu', encode=uu_encode, decode=uu_decode, incrementalencoder=IncrementalEncoder, incrementaldecoder=IncrementalDecoder, streamreader=StreamReader, streamwriter=StreamWriter, _is_text_encoding=False, )
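Since the codec is registered with _is_text_encoding=False, it is driven through codecs.encode()/codecs.decode() on bytes rather than str.encode(); a sketch:

```python
import codecs

data = b"hello, uu"
wire = codecs.encode(data, "uu")
# The default header uses mode 0o666 and the placeholder filename '<data>'.
assert wire.startswith(b"begin 666 <data>")
# Decoding reads from the 'begin' line through 'end' and round-trips.
assert codecs.decode(wire, "uu") == data
```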
"""Python 'zlib_codec' Codec - zlib compression encoding.

This codec de/encodes from bytes to bytes.

Written by Marc-Andre Lemburg (mal@lemburg.com).
"""
# NOTE: This codec needs the optional zlib module!
import codecs import zlib def zlib_encode(input, errors='strict'): assert errors == 'strict' return (zlib.compress(input), len(input)) def zlib_decode(input, errors='strict'): assert errors == 'strict' return (zlib.decompress(input), len(input)) class Codec(codecs.Codec): def encode(self, input, errors='strict'): return zlib_encode(input, errors) def decode(self, input, errors='strict'): return zlib_decode(input, errors) class IncrementalEncoder(codecs.IncrementalEncoder): def __init__(self, errors='strict'): assert errors == 'strict' self.errors = errors self.compressobj = zlib.compressobj() def encode(self, input, final=False): if final: c = self.compressobj.compress(input) return c + self.compressobj.flush() else: return self.compressobj.compress(input) def reset(self): self.compressobj = zlib.compressobj() class IncrementalDecoder(codecs.IncrementalDecoder): def __init__(self, errors='strict'): assert errors == 'strict' self.errors = errors self.decompressobj = zlib.decompressobj() def decode(self, input, final=False): if final: c = self.decompressobj.decompress(input) return c + self.decompressobj.flush() else: return self.decompressobj.decompress(input) def reset(self): self.decompressobj = zlib.decompressobj() class StreamWriter(Codec, codecs.StreamWriter): charbuffertype = bytes class StreamReader(Codec, codecs.StreamReader): charbuffertype = bytes def getregentry(): return codecs.CodecInfo( name='zlib', encode=zlib_encode, decode=zlib_decode, incrementalencoder=IncrementalEncoder, incrementaldecoder=IncrementalDecoder, streamreader=StreamReader, streamwriter=StreamWriter, _is_text_encoding=False, )
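As with the uu codec, this bytes-to-bytes codec is used via codecs.encode()/codecs.decode(); a round-trip sketch:

```python
import codecs

data = b"abc" * 100
packed = codecs.encode(data, "zlib")
assert len(packed) < len(data)                # repetitive input compresses
assert codecs.decode(packed, "zlib") == data  # lossless round-trip
```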
# ensurepip: bootstrap pip into the current Python installation (or the
# given root directory). Note that calling bootstrap() will alter both
# sys.path and os.environ.
#
# Packages bundled in ensurepip._bundled have wheel_name set; packages
# found in WHEEL_PKG_DIR have wheel_path set. Some Linux distribution
# packaging policies recommend against bundling dependencies: for
# example, Fedora installs wheel packages in the /usr/share/python-wheels/
# directory and doesn't install the ensurepip._bundled package. The wheel
# package directory is only used if all packages are found there.
#
# _find_packages() sorts directory listings to make the code
# deterministic if a directory contains multiple wheel files of the same
# package, but doesn't attempt correct version comparison, since this
# case should not happen. A filename like "pip-21.2.4-py3-none-any.whl"
# yields the version "21.2.4".
#
# _run_pip() runs the bootstrapping in a subprocess to avoid leaking any
# state that happens after pip has executed; in particular this avoids
# pip holding onto the files in additional_paths, which would prevent us
# removing them at the end of the invocation. The code is run in isolated
# mode (-I) if the current interpreter is running isolated.
#
# All PIP_* environment variables are deliberately ignored when invoking
# pip (see http://bugs.python.org/issue19734 for details), as are the
# settings in the default pip configuration file (see
# http://bugs.python.org/issue20053 for details).
#
# By default, installing pip installs all of the following scripts
# (X.Y == running Python version): pip, pipX, pipX.Y. pip 1.5+ allows
# ensurepip to request that some of those be left out: altinstall omits
# pip and pipX; otherwise, unless default_pip is given, pip is omitted.
# altinstall and default_pip cannot be used together.
#
# _uninstall_helper() supports a clean default uninstall process on
# Windows; it does nothing if pip was never installed or has been
# removed, and leaves a pip whose version doesn't match the available
# one alone.
import collections import os import os.path import subprocess import sys import sysconfig import tempfile from importlib import resources __all__ = ["version", "bootstrap"] _PACKAGE_NAMES = ('pip',) _PIP_VERSION = "23.2.1" _PROJECTS = [ ("pip", _PIP_VERSION, "py3"), ] _Package = collections.namedtuple('Package', ('version', 'wheel_name', 'wheel_path')) _WHEEL_PKG_DIR = sysconfig.get_config_var('WHEEL_PKG_DIR') def _find_packages(path): packages = {} try: filenames = os.listdir(path) except OSError: filenames = () filenames = sorted(filenames) for filename in filenames: if not filename.endswith(".whl"): continue for name in _PACKAGE_NAMES: prefix = name + '-' if filename.startswith(prefix): break else: continue version = filename.removeprefix(prefix).partition('-')[0] wheel_path = os.path.join(path, filename) packages[name] = _Package(version, None, wheel_path) return packages def _get_packages(): global _PACKAGES, _WHEEL_PKG_DIR if _PACKAGES is not None: return _PACKAGES packages = {} for name, version, py_tag in _PROJECTS: wheel_name = f"{name}-{version}-{py_tag}-none-any.whl" packages[name] = _Package(version, wheel_name, None) if _WHEEL_PKG_DIR: dir_packages = _find_packages(_WHEEL_PKG_DIR) if all(name in dir_packages for name in _PACKAGE_NAMES): packages = dir_packages _PACKAGES = packages return packages _PACKAGES = None def _run_pip(args, additional_paths=None): code = f""" import runpy import sys sys.path = {additional_paths or []} + sys.path sys.argv[1:] = {args} runpy.run_module("pip", run_name="__main__", alter_sys=True) """ cmd = [ sys.executable, '-W', 'ignore::DeprecationWarning', '-c', code, ] if sys.flags.isolated: cmd.insert(1, '-I') return subprocess.run(cmd, check=True).returncode def version(): return _get_packages()['pip'].version def _disable_pip_configuration_settings(): keys_to_remove = [k for k in os.environ if k.startswith("PIP_")] for k in keys_to_remove: del os.environ[k] os.environ['PIP_CONFIG_FILE'] = os.devnull def bootstrap(*, root=None, upgrade=False, user=False, altinstall=False, default_pip=False, verbosity=0): _bootstrap(root=root, upgrade=upgrade, user=user, altinstall=altinstall,
default_pip=default_pip, verbosity=verbosity) def _bootstrap(*, root=None, upgrade=False, user=False, altinstall=False, default_pip=False, verbosity=0): if altinstall and default_pip: raise ValueError("Cannot use altinstall and default_pip together") sys.audit("ensurepip.bootstrap", root) _disable_pip_configuration_settings() if altinstall: os.environ["ENSUREPIP_OPTIONS"] = "altinstall" elif not default_pip: os.environ["ENSUREPIP_OPTIONS"] = "install" with tempfile.TemporaryDirectory() as tmpdir: additional_paths = [] for name, package in _get_packages().items(): if package.wheel_name: wheel_name = package.wheel_name wheel_path = resources.files("ensurepip") / "_bundled" / wheel_name whl = wheel_path.read_bytes() else: with open(package.wheel_path, "rb") as fp: whl = fp.read() wheel_name = os.path.basename(package.wheel_path) filename = os.path.join(tmpdir, wheel_name) with open(filename, "wb") as fp: fp.write(whl) additional_paths.append(filename) args = ["install", "--no-cache-dir", "--no-index", "--find-links", tmpdir] if root: args += ["--root", root] if upgrade: args += ["--upgrade"] if user: args += ["--user"] if verbosity: args += ["-" + "v" * verbosity] return _run_pip([*args, *_PACKAGE_NAMES], additional_paths) def _uninstall_helper(*, verbosity=0): try: import pip except ImportError: return available_version = version() if pip.__version__ != available_version: print(f"ensurepip will only uninstall a matching version " f"({pip.__version__!r} installed, " f"{available_version!r} available)", file=sys.stderr) return _disable_pip_configuration_settings() args = ["uninstall", "-y", "--disable-pip-version-check"] if verbosity: args += ["-" + "v" * verbosity] return _run_pip([*args, *reversed(_PACKAGE_NAMES)]) def _main(argv=None): import argparse parser = argparse.ArgumentParser(prog="python -m ensurepip") parser.add_argument( "--version", action="version", version="pip {}".format(version()), help="Show the version of pip that is bundled with this Python.", ) 
parser.add_argument( "-v", "--verbose", action="count", default=0, dest="verbosity", help=("Give more output. Option is additive, and can be used up to 3 " "times."), ) parser.add_argument( "-U", "--upgrade", action="store_true", default=False, help="Upgrade pip and dependencies, even if already installed.", ) parser.add_argument( "--user", action="store_true", default=False, help="Install using the user scheme.", ) parser.add_argument( "--root", default=None, help="Install everything relative to this alternate root directory.", ) parser.add_argument( "--altinstall", action="store_true", default=False, help=("Make an alternate install, installing only the X.Y versioned " "scripts (Default: pipX, pipX.Y)."), ) parser.add_argument( "--default-pip", action="store_true", default=False, help=("Make a default pip install, installing the unqualified pip " "in addition to the versioned scripts."), ) args = parser.parse_args(argv) return _bootstrap( root=args.root, upgrade=args.upgrade, user=args.user, verbosity=args.verbosity, altinstall=args.altinstall, default_pip=args.default_pip, )
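The public surface above is small: version() and bootstrap(), plus the `python -m ensurepip` CLI. A minimal, side-effect-free sketch (bootstrap() itself installs pip and mutates sys.path/os.environ, so it is not invoked here):

```python
import ensurepip

# version() reports the pip version that bootstrap() would install,
# taken from the bundled wheel or from WHEEL_PKG_DIR.
v = ensurepip.version()
assert isinstance(v, str) and v

# Typical CLI usage (shown as comments only):
#   python -m ensurepip            # install pip if missing
#   python -m ensurepip --upgrade  # upgrade an existing installation
```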
"""Basic pip uninstallation support, helper for the Windows uninstaller."""
import argparse import ensurepip import sys def _main(argv=None): parser = argparse.ArgumentParser(prog="python -m ensurepip._uninstall") parser.add_argument( "--version", action="version", version="pip {}".format(ensurepip.version()), help="Show the version of pip this will attempt to uninstall.", ) parser.add_argument( "-v", "--verbose", action="count", default=0, dest="verbosity", help=("Give more output. Option is additive, and can be used up to 3 " "times."), ) args = parser.parse_args(argv) return ensurepip._uninstall_helper(verbosity=args.verbosity) if __name__ == "__main__": sys.exit(_main())
"""Utilities for comparing files and directories.

Classes:
    dircmp

Functions:
    cmp(f1, f2, shallow=True) -> int
    cmpfiles(a, b, common) -> ([], [], [])
    clear_cache()
"""
# cmp(f1, f2, shallow=True): compare two files.
#   f1, f2 -- file names.
#   shallow -- treat files as identical if their stat signatures (type,
#     size, mtime) are identical; otherwise files are considered different
#     if their sizes or contents differ (default: True).
#   Returns True if the files are the same, False otherwise. This function
#   uses a cache for past comparisons, with cache entries invalidated if
#   their stat information changes; the cache may be cleared by calling
#   clear_cache(). The cache is also cleared when it grows past 100
#   entries, to limit its maximum size.
#
# dircmp(a, b, ignore=None, hide=None): a class that manages the
# comparison of two directories. ignore is a list of names to ignore
# (defaults to DEFAULT_IGNORES); hide is a list of names to hide
# (defaults to [os.curdir, os.pardir], names never to be shown).
#
# High-level usage:
#   x = dircmp(dir1, dir2)
#   x.report()                 -> prints a report on the differences
#                                 between dir1 and dir2
#   x.report_partial_closure() -> also reports on common immediate
#                                 subdirectories
#   x.report_full_closure()    -> like report_partial_closure, but fully
#                                 recursive
#
# Attributes: left_list/right_list (the files in dir1 and dir2, filtered
# by hide and ignore); common (names in both); left_only/right_only;
# common_dirs; common_files; common_funny (names whose type differs
# between the trees, or which are not stat-able); same_files; diff_files;
# funny_files (files which could not be compared); and subdirs, a
# dictionary of dircmp instances (or MyDirCmp instances if dircmp was
# subclassed), keyed by names in common_dirs. The hide and ignore
# properties are inherited by the subdirectory objects from the parent.
# The report output format is purposely lousy.
#
# cmpfiles(a, b, common, shallow=True): compare common files in two
# directories; returns a tuple of three lists: files that compare equal,
# files that are different, and filenames that aren't regular files.
#
# The private _cmp() helper returns 0 for equal, 1 for different, and 2
# for funny cases (can't stat, etc.); _filter() returns a copy of a list
# with items that occur in skip removed; demo() provides demonstration
# and testing ("python filecmp.py [-r] dir1 dir2").
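A minimal sketch exercising cmp() and dircmp as described (the file names here are arbitrary examples):

```python
import filecmp
import os
import tempfile

with tempfile.TemporaryDirectory() as d1, tempfile.TemporaryDirectory() as d2:
    # An identical file in both trees, plus one file only on the left.
    for d in (d1, d2):
        with open(os.path.join(d, "same.txt"), "w") as f:
            f.write("identical\n")
    with open(os.path.join(d1, "only_left.txt"), "w") as f:
        f.write("x")

    # shallow=False forces a content comparison even when mtimes differ.
    files_equal = filecmp.cmp(os.path.join(d1, "same.txt"),
                              os.path.join(d2, "same.txt"), shallow=False)

    dc = filecmp.dircmp(d1, d2)
    left_only = dc.left_only      # computed lazily via __getattr__/methodmap
    same_files = dc.same_files

assert files_equal
assert left_only == ["only_left.txt"]
assert same_files == ["same.txt"]
```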
import os
import stat
from itertools import filterfalse
from types import GenericAlias

__all__ = ['clear_cache', 'cmp', 'dircmp', 'cmpfiles', 'DEFAULT_IGNORES']

_cache = {}
BUFSIZE = 8*1024

DEFAULT_IGNORES = [
    'RCS', 'CVS', 'tags', '.git', '.hg', '.bzr', '_darcs', '__pycache__']


def clear_cache():
    """Clear the filecmp cache."""
    _cache.clear()


def cmp(f1, f2, shallow=True):
    """Compare two files."""
    s1 = _sig(os.stat(f1))
    s2 = _sig(os.stat(f2))
    if s1[0] != stat.S_IFREG or s2[0] != stat.S_IFREG:
        return False
    if shallow and s1 == s2:
        return True
    if s1[1] != s2[1]:
        return False

    outcome = _cache.get((f1, f2, s1, s2))
    if outcome is None:
        outcome = _do_cmp(f1, f2)
        if len(_cache) > 100:      # limit the maximum size of the cache
            clear_cache()
        _cache[f1, f2, s1, s2] = outcome
    return outcome


def _sig(st):
    return (stat.S_IFMT(st.st_mode), st.st_size, st.st_mtime)


def _do_cmp(f1, f2):
    bufsize = BUFSIZE
    with open(f1, 'rb') as fp1, open(f2, 'rb') as fp2:
        while True:
            b1 = fp1.read(bufsize)
            b2 = fp2.read(bufsize)
            if b1 != b2:
                return False
            if not b1:
                return True


class dircmp:
    """A class that manages the comparison of 2 directories."""

    def __init__(self, a, b, ignore=None, hide=None):
        """Initialize."""
        self.left = a
        self.right = b
        if hide is None:
            self.hide = [os.curdir, os.pardir]  # Names never to be shown
        else:
            self.hide = hide
        if ignore is None:
            self.ignore = DEFAULT_IGNORES
        else:
            self.ignore = ignore

    def phase0(self):  # Compare everything except common subdirectories
        self.left_list = _filter(os.listdir(self.left),
                                 self.hide+self.ignore)
        self.right_list = _filter(os.listdir(self.right),
                                  self.hide+self.ignore)
        self.left_list.sort()
        self.right_list.sort()

    def phase1(self):  # Compute common names
        a = dict(zip(map(os.path.normcase, self.left_list), self.left_list))
        b = dict(zip(map(os.path.normcase, self.right_list), self.right_list))
        self.common = list(map(a.__getitem__, filter(b.__contains__, a)))
        self.left_only = list(map(a.__getitem__, filterfalse(b.__contains__, a)))
        self.right_only = list(map(b.__getitem__, filterfalse(a.__contains__, b)))

    def phase2(self):  # Distinguish files, directories, funnies
        self.common_dirs = []
        self.common_files = []
        self.common_funny = []

        for x in self.common:
            a_path = os.path.join(self.left, x)
            b_path = os.path.join(self.right, x)

            ok = True
            try:
                a_stat = os.stat(a_path)
            except OSError:
                # print('Can\'t stat', a_path, ':', why.args[1])
                ok = False
            try:
                b_stat = os.stat(b_path)
            except OSError:
                # print('Can\'t stat', b_path, ':', why.args[1])
                ok = False

            if ok:
                a_type = stat.S_IFMT(a_stat.st_mode)
                b_type = stat.S_IFMT(b_stat.st_mode)
                if a_type != b_type:
                    self.common_funny.append(x)
                elif stat.S_ISDIR(a_type):
                    self.common_dirs.append(x)
                elif stat.S_ISREG(a_type):
                    self.common_files.append(x)
                else:
                    self.common_funny.append(x)
            else:
                self.common_funny.append(x)

    def phase3(self):  # Find out differences between common files
        xx = cmpfiles(self.left, self.right, self.common_files)
        self.same_files, self.diff_files, self.funny_files = xx

    def phase4(self):  # Find out differences between common subdirectories
        # A new dircmp (or MyDirCmp if dircmp was subclassed) object is
        # created for each common subdirectory; these are stored in a
        # dictionary indexed by filename.  The hide and ignore properties
        # are inherited from the parent.
        self.subdirs = {}
        for x in self.common_dirs:
            a_x = os.path.join(self.left, x)
            b_x = os.path.join(self.right, x)
            self.subdirs[x] = self.__class__(a_x, b_x, self.ignore, self.hide)

    def phase4_closure(self):  # Recursively call phase4() on subdirectories
        self.phase4()
        for sd in self.subdirs.values():
            sd.phase4_closure()

    def report(self):  # Print a report on the differences between a and b
        # Output format is purposely lousy
        print('diff', self.left, self.right)
        if self.left_only:
            self.left_only.sort()
            print('Only in', self.left, ':', self.left_only)
        if self.right_only:
            self.right_only.sort()
            print('Only in', self.right, ':', self.right_only)
        if self.same_files:
            self.same_files.sort()
            print('Identical files :', self.same_files)
        if self.diff_files:
            self.diff_files.sort()
            print('Differing files :', self.diff_files)
        if self.funny_files:
            self.funny_files.sort()
            print('Trouble with common files :', self.funny_files)
        if self.common_dirs:
            self.common_dirs.sort()
            print('Common subdirectories :', self.common_dirs)
        if self.common_funny:
            self.common_funny.sort()
            print('Common funny cases :', self.common_funny)

    def report_partial_closure(self):  # Print reports on self and on subdirs
        self.report()
        for sd in self.subdirs.values():
            print()
            sd.report()

    def report_full_closure(self):  # Report on self and subdirs recursively
        self.report()
        for sd in self.subdirs.values():
            print()
            sd.report_full_closure()

    methodmap = dict(subdirs=phase4,
                     same_files=phase3, diff_files=phase3, funny_files=phase3,
                     common_dirs=phase2, common_files=phase2,
                     common_funny=phase2,
                     common=phase1, left_only=phase1, right_only=phase1,
                     left_list=phase0, right_list=phase0)

    def __getattr__(self, attr):
        if attr not in self.methodmap:
            raise AttributeError(attr)
        self.methodmap[attr](self)
        return getattr(self, attr)

    __class_getitem__ = classmethod(GenericAlias)


def cmpfiles(a, b, common, shallow=True):
    """Compare common files in two directories."""
    res = ([], [], [])
    for x in common:
        ax = os.path.join(a, x)
        bx = os.path.join(b, x)
        res[_cmp(ax, bx, shallow)].append(x)
    return res


# Compare two files.
# Return:
#   0 for equal
#   1 for different
#   2 for funny cases (can't stat, etc.)
def _cmp(a, b, sh, abs=abs, cmp=cmp):
    try:
        return not abs(cmp(a, b, sh))
    except OSError:
        return 2


# Return a copy with items that occur in skip removed.
def _filter(flist, skip):
    return list(filterfalse(skip.__contains__, flist))


# Demonstration and testing.
def demo():
    import sys
    import getopt
    options, args = getopt.getopt(sys.argv[1:], 'r')
    if len(args) != 2:
        raise getopt.GetoptError('need exactly two args', None)
    dd = dircmp(args[0], args[1])
    if ('-r', '') in options:
        dd.report_full_closure()
    else:
        dd.report()

if __name__ == '__main__':
    demo()
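A short usage sketch for the file-level helpers `cmp()` and `cmpfiles()`, assuming two temporary directories that share one identical and one differing file:

```python
# Sketch: cmp() and cmpfiles() on throwaway files.
import filecmp
import os
import tempfile

with tempfile.TemporaryDirectory() as a, tempfile.TemporaryDirectory() as b:
    # eq.txt has identical contents on both sides.
    for d in (a, b):
        with open(os.path.join(d, "eq.txt"), "w") as f:
            f.write("same")
    # ne.txt differs (different sizes, so even a shallow cmp differs).
    with open(os.path.join(a, "ne.txt"), "w") as f:
        f.write("left")
    with open(os.path.join(b, "ne.txt"), "w") as f:
        f.write("right!")

    assert filecmp.cmp(os.path.join(a, "eq.txt"),
                       os.path.join(b, "eq.txt"), shallow=False)
    match, mismatch, errors = filecmp.cmpfiles(a, b, ["eq.txt", "ne.txt"],
                                               shallow=False)
    assert match == ["eq.txt"]
    assert mismatch == ["ne.txt"]
    assert errors == []
```

`shallow=False` is used here because freshly written files may or may not share a stat signature; content comparison makes the result deterministic.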
fileinput -- helper class to quickly write a loop over all standard input
files.

Typical use is:

    import fileinput
    for line in fileinput.input(encoding="utf-8"):
        process(line)

This iterates over the lines of all files listed in sys.argv[1:],
defaulting to sys.stdin if the list is empty.  If a filename is '-' it is
also replaced by sys.stdin and the optional arguments mode and openhook
are ignored.  To specify an alternative list of filenames, pass it as the
argument to input().  A single file name is also allowed.

Functions filename(), lineno() return the filename and cumulative line
number of the line that has just been read; filelineno() returns its line
number in the current file; isfirstline() returns true iff the line just
read is the first line of its file; isstdin() returns true iff the line
was read from sys.stdin.

Function nextfile() closes the current file so that the next iteration
will read the first line from the next file (if any); lines not read from
the file will not count towards the cumulative line count; the filename
is not changed until after the first line of the next file has been read.
Function close() closes the sequence.

Before any lines have been read, filename() returns None and both line
numbers are zero; nextfile() has no effect.  After all lines have been
read, filename() and the line number functions return the values
pertaining to the last line read; nextfile() has no effect.

All files are opened in text mode by default; you can override this by
setting the mode parameter to input() or FileInput.__init__().  If an I/O
error occurs during opening or reading a file, the OSError exception is
raised.

If sys.stdin is used more than once, the second and further use will
return no lines, except perhaps for interactive use, or if it has been
explicitly reset (e.g. using sys.stdin.seek(0)).

Empty files are opened and immediately closed; the only time their
presence in the list of filenames is noticeable at all is when the last
file opened is empty.

It is possible that the last line of a file doesn't end in a newline
character; otherwise lines are returned including the trailing newline.

Class FileInput is the implementation; its methods filename(), lineno(),
fileline(), isfirstline(), isstdin(), fileno(), nextfile() and close()
correspond to the functions of the same name in the module.  In addition
it has a readline() method which returns the next input line, and a
__getitem__() method which implements the sequence behavior.  The
sequence must be accessed in strictly sequential order; random access and
readline() cannot be mixed.

Optional in-place filtering: if the keyword argument inplace=True is
passed to input() or to the FileInput constructor, the file is moved to a
backup file and standard output is directed to the input file.  This
makes it possible to write a filter that rewrites its input file in
place.  If the keyword argument backup=".<some extension>" is also given,
it specifies the extension for the backup file, and the backup file
remains around; by default, the extension is ".bak" and it is deleted
when the output file is closed.  In-place filtering is disabled when
standard input is read.

XXX The current implementation does not work for MS-DOS 8+3 filesystems.

Module-level API summary:

input(files=None, inplace=False, backup="", *, mode="r", openhook=None)
    Return an instance of the FileInput class, which can be iterated.
    The parameters are passed to the constructor of the FileInput class.
    The returned instance, in addition to being an iterator, keeps global
    state for the functions of this module.

close()
    Close the sequence.

nextfile()
    Close the current file so that the next iteration will read the first
    line from the next file (if any).  Before the first line has been
    read, this function has no effect; it cannot be used to skip the
    first file.  After the last line of the last file has been read, this
    function has no effect.

filename()
    Return the name of the file currently being read.  Before the first
    line has been read, returns None.

lineno()
    Return the cumulative line number of the line that has just been
    read.  Before the first line has been read, returns 0.  After the
    last line of the last file has been read, returns the line number of
    that line.

filelineno()
    Return the line number in the current file.  Before the first line
    has been read, returns 0.  After the last line of the last file has
    been read, returns the line number of that line within the file.

fileno()
    Return the file number of the current file.  When no file is
    currently opened, returns -1.

isfirstline()
    Return True iff the line just read is the first line of its file,
    otherwise False.

isstdin()
    Return True if the last line was read from sys.stdin, otherwise
    False.
import io
import sys, os
from types import GenericAlias

__all__ = ["input", "close", "nextfile", "filename", "lineno", "filelineno",
           "fileno", "isfirstline", "isstdin", "FileInput", "hook_compressed",
           "hook_encoded"]

_state = None


def input(files=None, inplace=False, backup="", *, mode="r", openhook=None,
          encoding=None, errors=None):
    global _state
    if _state and _state._file:
        raise RuntimeError("input() already active")
    _state = FileInput(files, inplace, backup, mode=mode, openhook=openhook,
                       encoding=encoding, errors=errors)
    return _state


def close():
    global _state
    state = _state
    _state = None
    if state:
        state.close()


def nextfile():
    if not _state:
        raise RuntimeError("no active input()")
    return _state.nextfile()


def filename():
    if not _state:
        raise RuntimeError("no active input()")
    return _state.filename()


def lineno():
    if not _state:
        raise RuntimeError("no active input()")
    return _state.lineno()


def filelineno():
    if not _state:
        raise RuntimeError("no active input()")
    return _state.filelineno()


def fileno():
    if not _state:
        raise RuntimeError("no active input()")
    return _state.fileno()


def isfirstline():
    if not _state:
        raise RuntimeError("no active input()")
    return _state.isfirstline()


def isstdin():
    if not _state:
        raise RuntimeError("no active input()")
    return _state.isstdin()


class FileInput:

    def __init__(self, files=None, inplace=False, backup="", *,
                 mode="r", openhook=None, encoding=None, errors=None):
        if isinstance(files, str):
            files = (files,)
        elif isinstance(files, os.PathLike):
            files = (os.fspath(files), )
        else:
            if files is None:
                files = sys.argv[1:]
            if not files:
                files = ('-',)
            else:
                files = tuple(files)
        self._files = files
        self._inplace = inplace
        self._backup = backup
        self._savestdout = None
        self._output = None
        self._filename = None
        self._startlineno = 0
        self._filelineno = 0
        self._file = None
        self._isstdin = False
        self._backupfilename = None
        self._encoding = encoding
        self._errors = errors

        # We can not use io.text_encoding() here because old openhook
        # doesn't take an encoding parameter.
        if (sys.flags.warn_default_encoding and
                "b" not in mode and encoding is None and openhook is None):
            import warnings
            warnings.warn("'encoding' argument not specified.",
                          EncodingWarning, 2)

        # Restrict mode argument to reading modes.
        if mode not in ('r', 'rb'):
            raise ValueError("FileInput opening mode must be 'r' or 'rb'")
        self._mode = mode
        self._write_mode = mode.replace('r', 'w')
        if openhook:
            if inplace:
                raise ValueError(
                    "FileInput cannot use an opening hook in inplace mode")
            if not callable(openhook):
                raise ValueError("FileInput openhook must be callable")
        self._openhook = openhook

    def __del__(self):
        self.close()

    def close(self):
        try:
            self.nextfile()
        finally:
            self._files = ()

    def __enter__(self):
        return self

    def __exit__(self, type, value, traceback):
        self.close()

    def __iter__(self):
        return self

    def __next__(self):
        while True:
            line = self._readline()
            if line:
                self._filelineno += 1
                return line
            if not self._file:
                raise StopIteration
            self.nextfile()
            # repeat with next file

    def nextfile(self):
        savestdout = self._savestdout
        self._savestdout = None
        if savestdout:
            sys.stdout = savestdout

        output = self._output
        self._output = None
        try:
            if output:
                output.close()
        finally:
            file = self._file
            self._file = None
            try:
                del self._readline  # restore FileInput._readline
            except AttributeError:
                pass
            try:
                if file and not self._isstdin:
                    file.close()
            finally:
                backupfilename = self._backupfilename
                self._backupfilename = None
                if backupfilename and not self._backup:
                    try:
                        os.unlink(backupfilename)
                    except OSError:
                        pass

                self._isstdin = False

    def readline(self):
        while True:
            line = self._readline()
            if line:
                self._filelineno += 1
                return line
            if not self._file:
                return line
            self.nextfile()
            # repeat with next file

    def _readline(self):
        if not self._files:
            if 'b' in self._mode:
                return b''
            else:
                return ''
        self._filename = self._files[0]
        self._files = self._files[1:]
        self._startlineno = self.lineno()
        self._filelineno = 0
        self._file = None
        self._isstdin = False
        self._backupfilename = 0

        # EncodingWarning is emitted in __init__() already
        if "b" not in self._mode:
            encoding = self._encoding or "locale"
        else:
            encoding = None

        if self._filename == '-':
            self._filename = '<stdin>'
            if 'b' in self._mode:
                self._file = getattr(sys.stdin, 'buffer', sys.stdin)
            else:
                self._file = sys.stdin
            self._isstdin = True
        else:
            if self._inplace:
                self._backupfilename = (
                    os.fspath(self._filename) + (self._backup or ".bak"))
                try:
                    os.unlink(self._backupfilename)
                except OSError:
                    pass
                # The next few lines may raise OSError
                os.rename(self._filename, self._backupfilename)
                self._file = open(self._backupfilename, self._mode,
                                  encoding=encoding, errors=self._errors)
                try:
                    perm = os.fstat(self._file.fileno()).st_mode
                except OSError:
                    self._output = open(self._filename, self._write_mode,
                                        encoding=encoding,
                                        errors=self._errors)
                else:
                    mode = os.O_CREAT | os.O_WRONLY | os.O_TRUNC
                    if hasattr(os, 'O_BINARY'):
                        mode |= os.O_BINARY

                    fd = os.open(self._filename, mode, perm)
                    self._output = os.fdopen(fd, self._write_mode,
                                             encoding=encoding,
                                             errors=self._errors)
                    try:
                        os.chmod(self._filename, perm)
                    except OSError:
                        pass
                self._savestdout = sys.stdout
                sys.stdout = self._output
            else:
                # This may raise OSError
                if self._openhook:
                    # Custom hooks made previous to Python 3.10 didn't
                    # have an encoding argument.
                    if self._encoding is None:
                        self._file = self._openhook(self._filename,
                                                    self._mode)
                    else:
                        self._file = self._openhook(
                            self._filename, self._mode,
                            encoding=self._encoding, errors=self._errors)
                else:
                    self._file = open(self._filename, self._mode,
                                      encoding=encoding,
                                      errors=self._errors)
        self._readline = self._file.readline  # hide FileInput._readline
        return self._readline()

    def filename(self):
        return self._filename

    def lineno(self):
        return self._startlineno + self._filelineno

    def filelineno(self):
        return self._filelineno

    def fileno(self):
        if self._file:
            try:
                return self._file.fileno()
            except ValueError:
                return -1
        else:
            return -1

    def isfirstline(self):
        return self._filelineno == 1

    def isstdin(self):
        return self._isstdin

    __class_getitem__ = classmethod(GenericAlias)


def hook_compressed(filename, mode, *, encoding=None, errors=None):
    if encoding is None and "b" not in mode:
        # EncodingWarning is emitted in FileInput() already.
        encoding = "locale"
    ext = os.path.splitext(filename)[1]
    if ext == '.gz':
        import gzip
        stream = gzip.open(filename, mode)
    elif ext == '.bz2':
        import bz2
        stream = bz2.BZ2File(filename, mode)
    else:
        return open(filename, mode, encoding=encoding, errors=errors)

    # gzip and bz2 are binary mode by default.
    if "b" not in mode:
        stream = io.TextIOWrapper(stream, encoding=encoding, errors=errors)
    return stream


def hook_encoded(encoding, errors=None):
    def openhook(filename, mode):
        return open(filename, mode, encoding=encoding, errors=errors)
    return openhook


def _test():
    import getopt
    inplace = False
    backup = False
    opts, args = getopt.getopt(sys.argv[1:], "ib:")
    for o, a in opts:
        if o == '-i': inplace = True
        if o == '-b': backup = a
    for line in input(args, inplace=inplace, backup=backup):
        if line[-1:] == '\n': line = line[:-1]
        if line[-1:] == '\r': line = line[:-1]
        print("%d: %s[%d]%s %s" % (lineno(), filename(), filelineno(),
                                   isfirstline() and "*" or "", line))
    print("%d: %s[%d]" % (lineno(), filename(), filelineno()))

if __name__ == '__main__':
    _test()
fnmatch -- filename matching with shell patterns.

fnmatch(FILENAME, PATTERN) matches according to the local convention.
fnmatchcase(FILENAME, PATTERN) always takes case into account.

The functions operate by translating the pattern into a regular
expression.  They cache the compiled regular expressions for speed.

The function translate(PATTERN) returns a regular expression
corresponding to PATTERN.  (It does not compile it.)

fnmatch(name, pat)
    Test whether FILENAME matches PATTERN.

    Patterns are Unix shell style:

        *       matches everything
        ?       matches any single character
        [seq]   matches any character in seq
        [!seq]  matches any char not in seq

    An initial period in FILENAME is not special.  Both FILENAME and
    PATTERN are first case-normalized if the operating system requires
    it.  If you don't want this, use fnmatchcase(FILENAME, PATTERN).

filter(names, pat)
    Construct a list from those elements of the iterable NAMES that match
    PAT.  (normcase() on POSIX is a no-op, so it is optimized away from
    the loop.)

fnmatchcase(name, pat)
    Test whether FILENAME matches PATTERN, including case.  This is a
    version of fnmatch() which doesn't case-normalize its arguments.

translate(pat)
    Translate a shell PATTERN to a regular expression.  There is no way
    to quote meta-characters.

    Implementation notes: consecutive '*' are compressed into one; empty
    ranges, invalid in a regular expression, are removed; backslashes and
    hyphens are escaped for set difference ('--'), but hyphens that
    create ranges shouldn't be escaped; set operations ('&&', '~~', '||')
    are escaped; an empty range never matches, while a negated empty
    range matches any character.  For an interior "* fixed" pairing, we
    want to do a minimal match followed by the fixed part with no
    possibility of backtracking; atomic groups allow us to spell that
    directly.  Note: people rely on the undocumented ability to join
    multiple translate() results together via "|" to build large regexps
    matching "one of many" shell patterns.
import os
import posixpath
import re
import functools

__all__ = ["filter", "fnmatch", "fnmatchcase", "translate"]


def fnmatch(name, pat):
    name = os.path.normcase(name)
    pat = os.path.normcase(pat)
    return fnmatchcase(name, pat)


@functools.lru_cache(maxsize=32768, typed=True)
def _compile_pattern(pat):
    if isinstance(pat, bytes):
        pat_str = str(pat, 'ISO-8859-1')
        res_str = translate(pat_str)
        res = bytes(res_str, 'ISO-8859-1')
    else:
        res = translate(pat)
    return re.compile(res).match


def filter(names, pat):
    result = []
    pat = os.path.normcase(pat)
    match = _compile_pattern(pat)
    if os.path is posixpath:
        # normcase on posix is NOP. Optimize it away from the loop.
        for name in names:
            if match(name):
                result.append(name)
    else:
        for name in names:
            if match(os.path.normcase(name)):
                result.append(name)
    return result


def fnmatchcase(name, pat):
    match = _compile_pattern(pat)
    return match(name) is not None


def translate(pat):
    STAR = object()
    parts = _translate(pat, STAR, '.')
    return _join_translated_parts(parts, STAR)


def _translate(pat, STAR, QUESTION_MARK):
    res = []
    add = res.append
    i, n = 0, len(pat)
    while i < n:
        c = pat[i]
        i = i+1
        if c == '*':
            # compress consecutive `*` into one
            if (not res) or res[-1] is not STAR:
                add(STAR)
        elif c == '?':
            add(QUESTION_MARK)
        elif c == '[':
            j = i
            if j < n and pat[j] == '!':
                j = j+1
            if j < n and pat[j] == ']':
                j = j+1
            while j < n and pat[j] != ']':
                j = j+1
            if j >= n:
                add('\\[')
            else:
                stuff = pat[i:j]
                if '-' not in stuff:
                    stuff = stuff.replace('\\', r'\\')
                else:
                    chunks = []
                    k = i+2 if pat[i] == '!' else i+1
                    while True:
                        k = pat.find('-', k, j)
                        if k < 0:
                            break
                        chunks.append(pat[i:k])
                        i = k+1
                        k = k+3
                    chunk = pat[i:j]
                    if chunk:
                        chunks.append(chunk)
                    else:
                        chunks[-1] += '-'
                    # Remove empty ranges -- invalid in RE.
                    for k in range(len(chunks)-1, 0, -1):
                        if chunks[k-1][-1] > chunks[k][0]:
                            chunks[k-1] = chunks[k-1][:-1] + chunks[k][1:]
                            del chunks[k]
                    # Escape backslashes and hyphens for set difference (--),
                    # but hyphens that create ranges shouldn't be escaped.
                    stuff = '-'.join(s.replace('\\', r'\\').replace('-', r'\-')
                                     for s in chunks)
                # Escape set operations (&&, ~~ and ||).
                stuff = re.sub(r'([&~|])', r'\\\1', stuff)
                i = j+1
                if not stuff:
                    # Empty range: never match.
                    add('(?!)')
                elif stuff == '!':
                    # Negated empty range: match any character.
                    add('.')
                else:
                    if stuff[0] == '!':
                        stuff = '^' + stuff[1:]
                    elif stuff[0] in ('^', '['):
                        stuff = '\\' + stuff
                    add(f'[{stuff}]')
        else:
            add(re.escape(c))
    assert i == n
    return res


def _join_translated_parts(inp, STAR):
    # Deal with STARs.
    res = []
    add = res.append
    i, n = 0, len(inp)
    # Fixed pieces at the start?
    while i < n and inp[i] is not STAR:
        add(inp[i])
        i += 1
    # Now deal with STAR fixed STAR fixed ...
    # For an interior `STAR fixed` pairing, we want to do a minimal
    # match followed by `fixed`, with no possibility of backtracking.
    # Atomic groups ("(?>...)") allow us to spell that directly.
    # Note: people rely on the undocumented ability to join multiple
    # translate() results together via "|" to build large regexps
    # matching "one of many" shell patterns.
    while i < n:
        assert inp[i] is STAR
        i += 1
        if i == n:
            add(".*")
            break
        assert inp[i] is not STAR
        fixed = []
        while i < n and inp[i] is not STAR:
            fixed.append(inp[i])
            i += 1
        fixed = "".join(fixed)
        if i == n:
            add(".*")
            add(fixed)
        else:
            add(f"(?>.*?{fixed})")
    assert i == n
    res = "".join(res)
    return fr'(?s:{res})\Z'
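A short usage sketch of the public functions, including compiling the uncompiled regular expression that `translate()` returns:

```python
# Sketch: fnmatch(), fnmatchcase(), filter(), and translate().
import fnmatch
import re

assert fnmatch.fnmatch("report.csv", "*.csv")
assert not fnmatch.fnmatchcase("README", "readme")   # case-sensitive variant
assert fnmatch.filter(["a.py", "b.txt", "c.py"], "*.py") == ["a.py", "c.py"]

# translate() yields a regular expression string; it must be compiled.
rx = re.compile(fnmatch.translate("data_?.json"))
assert rx.match("data_1.json")                # '?' matches one character
assert rx.match("data_12.json") is None       # ... and only one
```

Because `fnmatchcase()` skips `os.path.normcase()`, its result is the same on every platform, whereas `fnmatch()` is case-insensitive on case-insensitive filesystems.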
originally contributed by sjoerd mullender significantly modified by jeffrey yasskin jyasskin at gmail com fraction infiniteprecision rational numbers from decimal import decimal import functools import math import numbers import operator import re import sys all fraction constants related to the hash implementation hashx is based on the reduction of x modulo the prime pyhashmodulus pyhashmodulus sys hashinfo modulus value to be used for rationals that reduce to infinity modulo pyhashmodulus pyhashinf sys hashinfo inf functools lrucachemaxsize 1 14 def hashalgorithmnumerator denominator to make sure that the hash of a fraction agrees with the hash of a numerically equal integer float or decimal instance we follow the rules for numeric hashes outlined in the documentation see library docs builtin types try dinv powdenominator 1 pyhashmodulus except valueerror valueerror means there is no modular inverse hash pyhashinf else the general algorithm now specifies that the absolute value of the hash is n dinv p where n is self numerator and p is pyhashmodulus that s optimized here in two ways first for a nonnegative int i hashi i p but the int hash implementation doesn t need to divide and is faster than doing p explicitly so we do hashn dinv instead second n is unbounded so its product with dinv may be arbitrarily expensive to compute the final answer is the same if we use the bounded n p instead which can again be done with an int hash call if 0 i p hashi i so this nested hash call wastes a bit of time making a redundant copy when n p but can save an arbitrarily large amount of computation for large n hash hashhashabsnumerator dinv result hash if numerator 0 else hash return 2 if result 1 else result rationalformat re compiler as optional whitespace at the start psign an optional sign then d d lookahead for digit or digit pnumddd numerator possibly empty followed by ss pdenomdd an optional denominator or pdecimalddd an optional fractional part e pexp dd and optional 
exponent sz and optional whitespace to finish helpers for formatting round a rational number to the nearest multiple of a given power of 10 rounds the rational number nd to the nearest integer multiple of 10exponent rounding to the nearest even integer multiple in the case of a tie returns a pair sign bool significand int representing the rounded value 1sign significand 10exponent if nonegzero is true then the returned sign will always be false when the significand is zero otherwise the sign reflects the sign of the input d must be positive but n and d need not be relatively prime the divmod quotient is correct for roundtiestowardspositiveinfinity in the case of a tie we zero out the least significant bit of q round a rational number to a given number of significant figures rounds the rational number nd to the given number of significant figures using the roundtiestoeven rule and returns a triple sign bool significand int exponent int representing the rounded value 1sign significand 10exponent in the special case where n 0 returns a significand of zero and an exponent of 1 figures for compatibility with formatting otherwise the returned significand satisfies 10figures 1 significand 10figures d must be positive but n and d need not be relatively prime figures must be positive special case for n 0 find integer m satisfying 10m 1 absnd 10m if absnd is a power of 10 either of the two possible values for m is fine round to a multiple of 10m figures the significand we get satisfies 10figures 1 significand 10figures adjust in the case where significand 10figures to ensure that 10figures 1 significand 10figures pattern for matching floatstyle format specifications supports e e f f g g and presentation types a 0 that s not followed by another digit is parsed as a minimum width rather than a zeropad flag re dotall re verbose fullmatch class fractionnumbers rational slots numerator denominator we re immutable so use new not init def newcls numerator0 denominatornone self 
class Fraction(numbers.Rational):
    """This class implements rational numbers.

    In the two-argument form of the constructor, Fraction(8, 6) will
    produce a rational number equivalent to 4/3. Both arguments must
    be Rational. The numerator defaults to 0 and the denominator
    defaults to 1 so that Fraction(3) == 3 and Fraction() == 0.

    Fractions can also be constructed from:

      - numeric strings similar to those accepted by the
        float constructor (for example, '-2.3' or '1e10')

      - strings of the form '123/456'

      - float and Decimal instances

      - other Rational instances (including integers)

    """

    __slots__ = ('_numerator', '_denominator')

    # We're immutable, so use __new__ not __init__
    def __new__(cls, numerator=0, denominator=None):
        """Constructs a Rational.

        Takes a string like '3/2' or '1.5', another Rational instance, a
        numerator/denominator pair, or a float.

        Examples
        --------

        >>> Fraction(10, -8)
        Fraction(-5, 4)
        >>> Fraction(Fraction(1, 7), 5)
        Fraction(1, 35)
        >>> Fraction(Fraction(1, 7), Fraction(2, 3))
        Fraction(3, 14)
        >>> Fraction('314')
        Fraction(314, 1)
        >>> Fraction('-35/4')
        Fraction(-35, 4)
        >>> Fraction('3.1415') # conversion from numeric string
        Fraction(6283, 2000)
        >>> Fraction('-47e-2') # string may include a decimal exponent
        Fraction(-47, 100)
        >>> Fraction(1.47)  # direct construction from float (exact conversion)
        Fraction(6620291452234629, 4503599627370496)
        >>> Fraction(2.25)
        Fraction(9, 4)
        >>> Fraction(Decimal('1.47'))
        Fraction(147, 100)

        """
        self = super(Fraction, cls).__new__(cls)

        if denominator is None:
            if type(numerator) is int:
                self._numerator = numerator
                self._denominator = 1
                return self

            elif isinstance(numerator, numbers.Rational):
                self._numerator = numerator.numerator
                self._denominator = numerator.denominator
                return self

            elif isinstance(numerator, (float, Decimal)):
                # Exact conversion
                self._numerator, self._denominator = numerator.as_integer_ratio()
                return self

            elif isinstance(numerator, str):
                # Handle construction from strings.
                m = _RATIONAL_FORMAT.match(numerator)
                if m is None:
                    raise ValueError('Invalid literal for Fraction: %r' %
                                     numerator)
                numerator = int(m.group('num') or '0')
                denom = m.group('denom')
                if denom:
                    denominator = int(denom)
                else:
                    denominator = 1
                    decimal = m.group('decimal')
                    if decimal:
                        decimal = decimal.replace('_', '')
                        scale = 10**len(decimal)
                        numerator = numerator * scale + int(decimal)
                        denominator *= scale
                    exp = m.group('exp')
                    if exp:
                        exp = int(exp)
                        if exp >= 0:
                            numerator *= 10**exp
                        else:
                            denominator *= 10**-exp
                if m.group('sign') == '-':
                    numerator = -numerator

            else:
                raise TypeError("argument should be a string "
                                "or a Rational instance")

        elif type(numerator) is int is type(denominator):
            pass  # *very* normal case

        elif (isinstance(numerator, numbers.Rational) and
              isinstance(denominator, numbers.Rational)):
            numerator, denominator = (
                numerator.numerator * denominator.denominator,
                denominator.numerator * numerator.denominator
            )
        else:
            raise TypeError("both arguments should be "
                            "Rational instances")

        if denominator == 0:
            raise ZeroDivisionError('Fraction(%s, 0)' % numerator)
        g = math.gcd(numerator, denominator)
        if denominator < 0:
            g = -g
        numerator //= g
        denominator //= g
        self._numerator = numerator
        self._denominator = denominator
        return self

    @classmethod
    def from_float(cls, f):
        """Converts a finite float to a rational number, exactly.

        Beware that Fraction.from_float(0.3) != Fraction(3, 10).

        """
        if isinstance(f, numbers.Integral):
            return cls(f)
        elif not isinstance(f, float):
            raise TypeError("%s.from_float() only takes floats, not %r (%s)" %
                            (cls.__name__, f, type(f).__name__))
        return cls._from_coprime_ints(*f.as_integer_ratio())

    @classmethod
    def from_decimal(cls, dec):
        """Converts a finite Decimal instance to a rational number, exactly."""
        from decimal import Decimal
        if isinstance(dec, numbers.Integral):
            dec = Decimal(int(dec))
        elif not isinstance(dec, Decimal):
            raise TypeError(
                "%s.from_decimal() only takes Decimals, not %r (%s)" %
                (cls.__name__, dec, type(dec).__name__))
        return cls._from_coprime_ints(*dec.as_integer_ratio())

    @classmethod
    def _from_coprime_ints(cls, numerator, denominator, /):
        """Convert a pair of ints to a rational number, for internal use.

        The ratio of integers should be in lowest terms and the denominator
        should be positive.
        """
        obj = super(Fraction, cls).__new__(cls)
        obj._numerator = numerator
        obj._denominator = denominator
        return obj

    def is_integer(self):
        """Return True if the Fraction is an integer."""
        return self._denominator == 1

    def as_integer_ratio(self):
        """Return a pair of integers, whose ratio is equal to the original Fraction.

        The ratio is in lowest terms and has a positive denominator.
        """
        return (self._numerator, self._denominator)
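The constructor's input forms can be exercised directly against the standard library's `Fraction`:

```python
from decimal import Decimal
from fractions import Fraction

# Two-argument form is normalized by the gcd; strings, floats and
# Decimals convert exactly.
print(Fraction(8, 6))                    # 4/3
print(Fraction("3/7") == Fraction(3, 7))        # '123/456' string form
print(Fraction("1.1") == Fraction(11, 10))      # decimal string, exact
print(Fraction("7e-3") == Fraction(7, 1000))    # exponent form
print(Fraction(0.5) == Fraction(1, 2))          # float: exact binary value
print(Fraction(Decimal("1.1")) == Fraction(11, 10))
```

Note the asymmetry between `Fraction("1.1")` and `Fraction(1.1)`: the string is read as the decimal 11/10, while the float converts to its exact binary value, which is not 11/10.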
    def limit_denominator(self, max_denominator=1000000):
        """Closest Fraction to self with denominator at most max_denominator.

        >>> Fraction('3.141592653589793').limit_denominator(10)
        Fraction(22, 7)
        >>> Fraction('3.141592653589793').limit_denominator(100)
        Fraction(311, 99)
        >>> Fraction(4321, 8765).limit_denominator(10000)
        Fraction(4321, 8765)

        """
        # Algorithm notes: For any real number x, define a *best upper
        # approximation* to x to be a rational number p/q such that:
        #
        #   (1) p/q >= x, and
        #   (2) if p/q > r/s >= x then s > q, for any rational r/s.
        #
        # Define *best lower approximation* similarly.  Then it can be
        # proved that a rational number is a best upper or lower
        # approximation to x if, and only if, it is a convergent or
        # semiconvergent of the (unique shortest) continued fraction
        # associated to x.
        #
        # To find a best rational approximation with denominator <= M,
        # we find the best upper and lower approximations with
        # denominator <= M and take whichever of these is closer to x.
        # In the event of a tie, the bound with smaller denominator is
        # chosen.  If both denominators are equal (which can happen
        # only when max_denominator == 1 and self is midway between
        # two integers) the lower bound--i.e. the floor of self--is
        # taken.

        if max_denominator < 1:
            raise ValueError("max_denominator should be at least 1")
        if self._denominator <= max_denominator:
            return Fraction(self)

        p0, q0, p1, q1 = 0, 1, 1, 0
        n, d = self._numerator, self._denominator
        while True:
            a = n // d
            q2 = q0 + a * q1
            if q2 > max_denominator:
                break
            p0, q0, p1, q1 = p1, q1, p0 + a * p1, q2
            n, d = d, n - a * d
        k = (max_denominator - q0) // q1

        # Determine which of the candidates (p0+k*p1)/(q0+k*q1) and p1/q1 is
        # closer to self. The distance between them is 1/(q1*(q0+k*q1)), while
        # the distance from p1/q1 to self is d/(q1*self._denominator). So we
        # need to compare 2*(q0+k*q1) with self._denominator/d.
        if 2 * d * (q0 + k * q1) <= self._denominator:
            return Fraction._from_coprime_ints(p1, q1)
        else:
            return Fraction._from_coprime_ints(p0 + k * p1, q0 + k * q1)

    @property
    def numerator(a):
        return a._numerator

    @property
    def denominator(a):
        return a._denominator

    def __repr__(self):
        """repr(self)"""
        return '%s(%s, %s)' % (self.__class__.__name__,
                               self._numerator, self._denominator)

    def __str__(self):
        """str(self)"""
        if self._denominator == 1:
            return str(self._numerator)
        else:
            return '%s/%s' % (self._numerator, self._denominator)
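The continued-fraction machinery above recovers the classic small-denominator approximations of pi:

```python
import math
from fractions import Fraction

pi = Fraction(math.pi)  # exact binary value of the float
print(pi.limit_denominator(10))    # Fraction(22, 7)
print(pi.limit_denominator(100))   # Fraction(311, 99)

# A fraction whose denominator already fits is returned unchanged.
print(Fraction(4321, 8765).limit_denominator(10000))  # Fraction(4321, 8765)
```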
    def __format__(self, format_spec, /):
        """Format this fraction according to the given format specification."""

        # Backwards compatiblility with existing formatting.
        if not format_spec:
            return str(self)

        # Validate and parse the format specifier.
        match = _FLOAT_FORMAT_SPECIFICATION_MATCHER(format_spec)
        if match is None:
            raise ValueError(
                f"Invalid format specifier {format_spec!r} "
                f"for object of type {type(self).__name__!r}"
            )
        elif match["align"] is not None and match["zeropad"] is not None:
            # Avoid the temptation to guess.
            raise ValueError(
                f"Invalid format specifier {format_spec!r} "
                f"for object of type {type(self).__name__!r}; "
                "can't use explicit alignment when zero-padding"
            )
        fill = match["fill"] or " "
        align = match["align"] or ">"
        pos_sign = "" if match["sign"] == "-" else match["sign"]
        no_neg_zero = bool(match["no_neg_zero"])
        alternate_form = bool(match["alt"])
        zeropad = bool(match["zeropad"])
        minimumwidth = int(match["minimumwidth"] or "0")
        thousands_sep = match["thousands_sep"]
        precision = int(match["precision"] or "6")
        presentation_type = match["presentation_type"]
        trim_zeros = presentation_type in "gG" and not alternate_form
        trim_point = not alternate_form
        exponent_indicator = "E" if presentation_type in "EFG" else "e"

        # Round to get the digits we need, figure out where to place the
        # point, and decide whether to use scientific notation.  'point_pos'
        # is the relative to the _end_ of the digit string: that is, it's the
        # number of digits that should follow the point.
        if presentation_type in "fF%":
            exponent = -precision
            if presentation_type == "%":
                exponent -= 2
            negative, significand = _round_to_exponent(
                self._numerator, self._denominator, exponent, no_neg_zero)
            scientific = False
            point_pos = precision
        else:  # presentation_type in "eEgG"
            figures = (
                max(precision, 1)
                if presentation_type in "gG"
                else precision + 1
            )
            negative, significand, exponent = _round_to_figures(
                self._numerator, self._denominator, figures)
            scientific = (
                presentation_type in "eE"
                or exponent > 0
                or exponent + figures <= -4
            )
            point_pos = figures - 1 if scientific else -exponent

        # Get the suffix - the part following the digits, if any.
        if presentation_type == "%":
            suffix = "%"
        elif scientific:
            suffix = f"{exponent_indicator}{exponent + point_pos:+03d}"
        else:
            suffix = ""

        # String of output digits, padded sufficiently with zeros on the left
        # so that we'll have at least one digit before the decimal point.
        digits = f"{significand:0{point_pos + 1}d}"

        # Before padding, the output has the form f"{sign}{leading}{trailing}",
        # where `leading` includes thousands separators if necessary, and
        # `trailing` includes the decimal separator where appropriate.
        sign = "-" if negative else pos_sign
        leading = digits[: len(digits) - point_pos]
        frac_part = digits[len(digits) - point_pos :]
        if trim_zeros:
            frac_part = frac_part.rstrip("0")
        separator = "" if trim_point and not frac_part else "."
        trailing = separator + frac_part + suffix

        # Do zero padding if required.
        if zeropad:
            min_leading = minimumwidth - len(sign) - len(trailing)
            # When adding thousands separators, they'll be added to the
            # zero-padded portion too, so we need to compensate.
            leading = leading.zfill(
                3 * min_leading // 4 + 1 if thousands_sep else min_leading
            )

        # Insert thousands separators if required.
        if thousands_sep:
            first_pos = 1 + (len(leading) - 1) % 3
            leading = leading[:first_pos] + "".join(
                thousands_sep + leading[pos : pos + 3]
                for pos in range(first_pos, len(leading), 3)
            )

        # We now have a sign and a body. Pad with fill character if necessary
        # and return.
        body = leading + trailing
        padding = fill * (minimumwidth - len(sign) - len(body))
        if align == ">":
            return padding + sign + body
        elif align == "<":
            return sign + body + padding
        elif align == "^":
            half = len(padding) // 2
            return padding[:half] + sign + body + padding[half:]
        else:  # align == "="
            return sign + padding + body

    def _operator_fallbacks(monomorphic_operator, fallback_operator):
        """Generates forward and reverse operators given a purely-rational
        operator and a function from the operator module.

        Use this like:
        __op__, __rop__ = _operator_fallbacks(just_rational_op, operator.op)

        In general, we want to implement the arithmetic operations so
        that mixed-mode operations either call an implementation whose
        author knew about the types of both arguments, or convert both
        to the nearest built in type and do the operation there. In
        Fraction, that means that we define __add__ and __radd__ as:

            def __add__(self, other):
                # Both types have numerators/denominator attributes,
                # so do the operation directly
                if isinstance(other, (int, Fraction)):
                    return Fraction(self.numerator * other.denominator +
                                    other.numerator * self.denominator,
                                    self.denominator * other.denominator)
                # float and complex don't have those operations, but we
                # know about those types, so special case them.
                elif isinstance(other, float):
                    return float(self) + other
                elif isinstance(other, complex):
                    return complex(self) + other
                # Let the other type take over.
                return NotImplemented

            def __radd__(self, other):
                # radd handles more types than add because there's
                # nothing left to fall back to.
                if isinstance(other, numbers.Rational):
                    return Fraction(self.numerator * other.denominator +
                                    other.numerator * self.denominator,
                                    self.denominator * other.denominator)
                elif isinstance(other, Real):
                    return float(other) + float(self)
                elif isinstance(other, Complex):
                    return complex(other) + complex(self)
                return NotImplemented

        There are 5 different cases for a mixed-type addition on
        Fraction.  I'll refer to all of the above code that doesn't
        refer to Fraction, float, or complex as "boilerplate".  'r'
        will be an instance of Fraction, which is a subtype of
        Rational (r : Fraction <: Rational), and b : B <: Complex.
        The first three involve 'r + b':

            1. If B <: Fraction, int, float, or complex, we handle
               that specially, and all is well.
            2. If Fraction falls back to the boilerplate code, and it
               were to return a value from __add__, we'd miss the
               possibility that B defines a more intelligent __radd__,
               so the boilerplate should return NotImplemented from
               __add__.  In particular, we don't handle Rational
               here, even though we could get an exact answer, in case
               the other type wants to do something special.
            3. If B <: Fraction, Python tries B.__radd__ before
               Fraction.__add__.  This is ok, because it was
               implemented with knowledge of Fraction, so it can
               handle those instances before delegating to Real or
               Complex.

        The next two situations describe 'b + r'.  We assume that b
        didn't know about Fraction in its implementation, and that it
        uses similar boilerplate code:

            4. If B <: Rational, then __radd__ converts both to the
               builtin rational type (hey look, that's us) and
               proceeds.
            5. Otherwise, __radd__ tries to find the nearest common
               base ABC, and fall back to its builtin type.  Since this
               class doesn't subclass a concrete type, there's no
               implementation to fall back to, so we need to try as
               hard as possible to return an actual value, or the user
               will get a TypeError.

        """
        def forward(a, b):
            if isinstance(b, Fraction):
                return monomorphic_operator(a, b)
            elif isinstance(b, int):
                return monomorphic_operator(a, Fraction(b))
            elif isinstance(b, float):
                return fallback_operator(float(a), b)
            elif isinstance(b, complex):
                return fallback_operator(complex(a), b)
            else:
                return NotImplemented
        forward.__name__ = '__' + fallback_operator.__name__ + '__'
        forward.__doc__ = monomorphic_operator.__doc__

        def reverse(b, a):
            if isinstance(a, numbers.Rational):
                # Includes ints.
                return monomorphic_operator(Fraction(a), b)
            elif isinstance(a, numbers.Real):
                return fallback_operator(float(a), float(b))
            elif isinstance(a, numbers.Complex):
                return fallback_operator(complex(a), complex(b))
            else:
                return NotImplemented
        reverse.__name__ = '__r' + fallback_operator.__name__ + '__'
        reverse.__doc__ = monomorphic_operator.__doc__

        return forward, reverse
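The fallback scheme described above is observable from the result types of mixed-mode arithmetic: rational operands stay exact, while float and complex operands pull the result down to the nearest built-in type:

```python
from fractions import Fraction

# int operands go through the exact rational path...
print(Fraction(1, 2) + 1)                   # 3/2, still a Fraction
print(1 - Fraction(1, 3))                   # 2/3, via the reverse operator

# ...while float/complex operands use the fallback operator.
print(Fraction(1, 2) + 0.25)                # 0.75, a float
print(type(Fraction(1, 2) * (2 + 0j)))      # complex
```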
    # Rational arithmetic algorithms: Knuth, TAOCP, Volume 2, 4.5.1.
    #
    # Assume input fractions a and b are normalized.
    #
    # 1) Consider addition/subtraction.
    #
    #    Let g = gcd(da, db).  Then
    #
    #        a +/- b == na/da +/- nb/db
    #                == (na*db +/- nb*da) / (da*db)
    #                == (na*(db//g) +/- nb*(da//g)) / ((da*db)//g)
    #                == t / d
    #
    #    Now, if g > 1, we're working with smaller integers.  Note that
    #    t, da//g and db//g are pairwise coprime: da//g and db//g share no
    #    common factors (they were removed), and da is coprime with na
    #    (since the input fractions are normalized), hence da//g and na
    #    are coprime; by symmetry, db//g and nb are coprime too.  Then
    #
    #        gcd(t, da//g) == gcd(na*(db//g), da//g) == 1
    #        gcd(t, db//g) == gcd(nb*(da//g), db//g) == 1
    #
    #    The above allows us to optimize reduction of the result to
    #    lowest terms: with g2 = gcd(t, d) == gcd(t, g),
    #
    #        a +/- b == (t//g2) / ((da//g)*(db//g2))
    #
    #    is a normalized fraction.  This is useful because the
    #    unnormalized denominator d could be much larger than g.
    #
    #    We should special-case g == 1 (and g2 == 1), since 60.8% of
    #    randomly-chosen integers are coprime:
    #    https://en.wikipedia.org/wiki/Coprime_integers#Probability_of_coprimality
    #    Note that g2 == 1 always for fractions obtained from floats:
    #    here g is a power of 2, and the unnormalized numerator t is
    #    an odd integer.
    #
    # 2) Consider multiplication.
    #
    #    Let g1 = gcd(na, db) and g2 = gcd(nb, da).  Then
    #
    #        a*b == (na*nb) / (da*db)
    #            == ((na//g1)*(nb//g2)) / ((db//g1)*(da//g2))
    #
    #    Note that after divisions we're multiplying smaller integers.
    #    Also, the resulting fraction is normalized, because each of
    #    the two factors in the numerator is coprime to each of the two
    #    factors in the denominator.  Indeed, pick na//g1: it's coprime
    #    with da//g2, because the input fractions are normalized; it's
    #    also coprime with db//g1, because common factors are removed
    #    by g1 == gcd(na, db).
    #
    #    As for addition/subtraction, we should special-case g1 == 1
    #    and g2 == 1, for the same reason; that happens also when
    #    multiplying rationals obtained from floats.

    def _add(a, b):
        """a + b"""
        na, da = a._numerator, a._denominator
        nb, db = b._numerator, b._denominator
        g = math.gcd(da, db)
        if g == 1:
            return Fraction._from_coprime_ints(na * db + da * nb, da * db)
        s = da // g
        t = na * (db // g) + nb * s
        g2 = math.gcd(t, g)
        if g2 == 1:
            return Fraction._from_coprime_ints(t, s * db)
        return Fraction._from_coprime_ints(t // g2, s * (db // g2))

    __add__, __radd__ = _operator_fallbacks(_add, operator.add)

    def _sub(a, b):
        """a - b"""
        na, da = a._numerator, a._denominator
        nb, db = b._numerator, b._denominator
        g = math.gcd(da, db)
        if g == 1:
            return Fraction._from_coprime_ints(na * db - da * nb, da * db)
        s = da // g
        t = na * (db // g) - nb * s
        g2 = math.gcd(t, g)
        if g2 == 1:
            return Fraction._from_coprime_ints(t, s * db)
        return Fraction._from_coprime_ints(t // g2, s * (db // g2))

    __sub__, __rsub__ = _operator_fallbacks(_sub, operator.sub)

    def _mul(a, b):
        """a * b"""
        na, da = a._numerator, a._denominator
        nb, db = b._numerator, b._denominator
        g1 = math.gcd(na, db)
        if g1 > 1:
            na //= g1
            db //= g1
        g2 = math.gcd(nb, da)
        if g2 > 1:
            nb //= g2
            da //= g2
        return Fraction._from_coprime_ints(na * nb, db * da)

    __mul__, __rmul__ = _operator_fallbacks(_mul, operator.mul)

    def _div(a, b):
        """a / b"""
        # Same as _mul(), with inversed b.
        nb, db = b._numerator, b._denominator
        if nb == 0:
            raise ZeroDivisionError('Fraction(%s, 0)' % db)
        na, da = a._numerator, a._denominator
        g1 = math.gcd(na, nb)
        if g1 > 1:
            na //= g1
            nb //= g1
        g2 = math.gcd(db, da)
        if g2 > 1:
            da //= g2
            db //= g2
        n, d = na * db, nb * da
        if d < 0:
            n, d = -n, -d
        return Fraction._from_coprime_ints(n, d)

    __truediv__, __rtruediv__ = _operator_fallbacks(_div, operator.truediv)

    def _floordiv(a, b):
        """a // b"""
        return (a._numerator * b._denominator) // (a._denominator * b._numerator)

    __floordiv__, __rfloordiv__ = _operator_fallbacks(_floordiv, operator.floordiv)
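The gcd optimization above never changes the value, only the size of the intermediate integers. A quick sketch (the `naive_add` helper is illustrative, not part of the module) checks the optimized sum against the textbook formula:

```python
import math
from fractions import Fraction

def naive_add(a, b):
    # Textbook formula: cross-multiply, then reduce once at the end.
    # Fraction's _add reaches the same value while keeping the
    # intermediate products smaller by factoring out g = gcd(da, db) first.
    n = a.numerator * b.denominator + b.numerator * a.denominator
    d = a.denominator * b.denominator
    g = math.gcd(n, d)
    return Fraction(n // g, d // g)

a, b = Fraction(7, 60), Fraction(11, 84)   # gcd(60, 84) == 12 > 1
print(a + b)                               # 26/105
print(a + b == naive_add(a, b))            # True
```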
    def _divmod(a, b):
        """(a // b, a % b)"""
        da, db = a._denominator, b._denominator
        div, n_mod = divmod(a._numerator * db, da * b._numerator)
        return div, Fraction(n_mod, da * db)

    __divmod__, __rdivmod__ = _operator_fallbacks(_divmod, divmod)

    def _mod(a, b):
        """a % b"""
        da, db = a._denominator, b._denominator
        return Fraction((a._numerator * db) % (b._numerator * da), da * db)

    __mod__, __rmod__ = _operator_fallbacks(_mod, operator.mod)

    def __pow__(a, b):
        """a ** b

        If b is not an integer, the result will be a float or complex
        since roots are generally irrational.  If b is an integer, the
        result will be rational.

        """
        if isinstance(b, numbers.Rational):
            if b.denominator == 1:
                power = b.numerator
                if power >= 0:
                    return Fraction._from_coprime_ints(a._numerator ** power,
                                                       a._denominator ** power)
                elif a._numerator > 0:
                    return Fraction._from_coprime_ints(a._denominator ** -power,
                                                       a._numerator ** -power)
                elif a._numerator == 0:
                    raise ZeroDivisionError('Fraction(%s, 0)' %
                                            a._denominator ** -power)
                else:
                    return Fraction._from_coprime_ints((-a._denominator) ** -power,
                                                       (-a._numerator) ** -power)
            else:
                # A fractional power will generally produce an
                # irrational number.
                return float(a) ** float(b)
        else:
            return float(a) ** b

    def __rpow__(b, a):
        """a ** b"""
        if b._denominator == 1 and b._numerator >= 0:
            # If a is an int, keep it that way if possible.
            return a ** b._numerator
        if isinstance(a, numbers.Rational):
            return Fraction(a.numerator, a.denominator) ** b
        if b._denominator == 1:
            return a ** b._numerator
        return a ** float(b)

    def __pos__(a):
        """+a: Coerces a subclass instance to Fraction"""
        return Fraction._from_coprime_ints(a._numerator, a._denominator)

    def __neg__(a):
        """-a"""
        return Fraction._from_coprime_ints(-a._numerator, a._denominator)

    def __abs__(a):
        """abs(a)"""
        return Fraction._from_coprime_ints(abs(a._numerator), a._denominator)

    def __int__(a, _index=operator.index):
        """int(a)"""
        if a._numerator < 0:
            return _index(-(-a._numerator // a._denominator))
        else:
            return _index(a._numerator // a._denominator)

    def __trunc__(a):
        """math.trunc(a)"""
        if a._numerator < 0:
            return -(-a._numerator // a._denominator)
        else:
            return a._numerator // a._denominator

    def __floor__(a):
        """math.floor(a)"""
        return a._numerator // a._denominator

    def __ceil__(a):
        """math.ceil(a)"""
        # The negations cleverly convince floordiv to return the ceiling.
        return -(-a._numerator // a._denominator)

    def __round__(self, ndigits=None):
        """round(self, ndigits)

        Rounds half toward even.
        """
        if ndigits is None:
            d = self._denominator
            floor, remainder = divmod(self._numerator, d)
            if remainder * 2 < d:
                return floor
            elif remainder * 2 > d:
                return floor + 1
            # Deal with the half case:
            elif floor % 2 == 0:
                return floor
            else:
                return floor + 1
        shift = 10**abs(ndigits)
        # See _operator_fallbacks.forward to check that the results of
        # these operations will always be Fraction and therefore have
        # round().
        if ndigits > 0:
            return Fraction(round(self * shift), shift)
        else:
            return Fraction(round(self / shift) * shift)

    def __hash__(self):
        """hash(self)"""
        return _hash_algorithm(self._numerator, self._denominator)

    def __eq__(a, b):
        """a == b"""
        if type(b) is int:
            return a._numerator == b and a._denominator == 1
        if isinstance(b, numbers.Rational):
            return (a._numerator == b.numerator and
                    a._denominator == b.denominator)
        if isinstance(b, numbers.Complex) and b.imag == 0:
            b = b.real
        if isinstance(b, float):
            if math.isnan(b) or math.isinf(b):
                # Comparisons with an infinity or nan should behave in
                # the same way for any finite a, so treat a as zero.
                return 0.0 == b
            else:
                return a == a.from_float(b)
        else:
            # Since a doesn't know how to compare with b, let's give b
            # a chance to compare itself with a.
            return NotImplemented
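The exponentiation rules above are easy to observe: integer exponents keep the result rational (inverting the fraction for negative powers), while fractional exponents fall back to floats:

```python
from fractions import Fraction

print(Fraction(2, 3) ** 2)        # 4/9, exact
print(Fraction(2, 3) ** -1)       # 3/2: negative power inverts
print(type(Fraction(1, 2) ** Fraction(1, 2)))  # float: sqrt is irrational

# round() with ndigits returns a Fraction (normalized to lowest terms).
print(round(Fraction(355, 113), 2))            # 157/50, i.e. 3.14
```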
    def _richcmp(self, other, op):
        """Helper for comparison operators, for internal use only.

        Implement comparison between a Rational instance `self`, and
        either another Rational instance or a float `other`.  If other
        is not a Rational instance or a float, return NotImplemented.
        `op` should be one of the six standard comparison operators.

        """
        # Convert other to a Rational instance where reasonable.
        if isinstance(other, numbers.Rational):
            return op(self._numerator * other.denominator,
                      self._denominator * other.numerator)
        if isinstance(other, float):
            if math.isnan(other) or math.isinf(other):
                return op(0.0, other)
            else:
                return op(self, self.from_float(other))
        else:
            return NotImplemented

    def __lt__(a, b):
        """a < b"""
        return a._richcmp(b, operator.lt)

    def __gt__(a, b):
        """a > b"""
        return a._richcmp(b, operator.gt)

    def __le__(a, b):
        """a <= b"""
        return a._richcmp(b, operator.le)

    def __ge__(a, b):
        """a >= b"""
        return a._richcmp(b, operator.ge)

    def __bool__(a):
        """a != 0"""
        # bpo-39274: Use bool() because (a._numerator != 0) can return an
        # object which is not a bool.
        return bool(a._numerator)

    # Support for pickling, copy, and deepcopy

    def __reduce__(self):
        return (self.__class__, (self._numerator, self._denominator))

    def __copy__(self):
        if type(self) == Fraction:
            return self     # I'm immutable; therefore I am my own clone
        return self.__class__(self._numerator, self._denominator)

    def __deepcopy__(self, memo):
        if type(self) == Fraction:
            return self     # My components are also immutable
        return self.__class__(self._numerator, self._denominator)
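Because `Fraction` instances are immutable, `__copy__` and `__deepcopy__` can simply return the object itself, and pickling round-trips through the `(numerator, denominator)` pair from `__reduce__`:

```python
import copy
import pickle
from fractions import Fraction

f = Fraction(3, 4)

# Copying an (exact-type) Fraction returns the very same object.
print(copy.copy(f) is f)        # True
print(copy.deepcopy(f) is f)    # True

# Pickling reconstructs an equal Fraction via the class and its two ints.
print(pickle.loads(pickle.dumps(f)) == f)   # True
```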
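The `__format__` machinery defined above gives `Fraction` the usual float-style presentation types. Note that this support is recent (it was added to the standard library's `fractions` module in Python 3.12; older versions only support plain `str()` formatting):

```python
import sys
from fractions import Fraction

# Guarded because Fraction.__format__ with float-style specs needs 3.12+.
if sys.version_info >= (3, 12):
    print(format(Fraction(355, 113), ".6f"))   # 3.141593
    print(format(Fraction(1, 3), ".2%"))       # 33.33%
```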
return Fraction._from_coprime_ints(n, d) __truediv__, __rtruediv__ = _operator_fallbacks(_div, operator.truediv) def _floordiv(a, b): return (a.numerator * b.denominator) // (a.denominator * b.numerator) __floordiv__, __rfloordiv__ = _operator_fallbacks(_floordiv, operator.floordiv) def _divmod(a, b): da, db = a.denominator, b.denominator div, n_mod = divmod(a.numerator * db, da * b.numerator) return div, Fraction(n_mod, da * db) __divmod__, __rdivmod__ = _operator_fallbacks(_divmod, divmod) def _mod(a, b): da, db = a.denominator, b.denominator return Fraction((a.numerator * db) % (b.numerator * da), da * db) __mod__, __rmod__ = _operator_fallbacks(_mod, operator.mod) def __pow__(a, b): if isinstance(b, numbers.Rational): if b.denominator == 1: power = b.numerator if power >= 0: return Fraction._from_coprime_ints(a._numerator ** power, a._denominator ** power) elif a._numerator > 0: return Fraction._from_coprime_ints(a._denominator ** -power, a._numerator ** -power) elif a._numerator == 0: raise ZeroDivisionError('Fraction(%s, 0)' % a._denominator ** -power) else: return Fraction._from_coprime_ints((-a._denominator) ** -power, (-a._numerator) ** -power) else: return float(a) ** float(b) else: return float(a) ** b def __rpow__(b, a): if b._denominator == 1 and b._numerator >= 0: return a ** b._numerator if isinstance(a, numbers.Rational): return Fraction(a.numerator, a.denominator) ** b if b._denominator == 1: return a ** b._numerator return a ** float(b) def __pos__(a): return Fraction._from_coprime_ints(a._numerator, a._denominator) def __neg__(a): return Fraction._from_coprime_ints(-a._numerator, a._denominator) def __abs__(a): return Fraction._from_coprime_ints(abs(a._numerator), a._denominator) def __int__(a, _index=operator.index): if a._numerator < 0: return _index(-(-a._numerator // a._denominator)) else: return _index(a._numerator // a._denominator) def __trunc__(a): if a._numerator < 0: return -(-a._numerator // a._denominator) else: return a._numerator // 
a._denominator def __floor__(a): return a._numerator // a._denominator def __ceil__(a): return -(-a._numerator // a._denominator) def __round__(self, ndigits=None): if ndigits is None: d = self._denominator floor, remainder = divmod(self._numerator, d) if remainder * 2 < d: return floor elif remainder * 2 > d: return floor + 1 elif floor % 2 == 0: return floor else: return floor + 1 shift = 10**abs(ndigits) if ndigits > 0: return Fraction(round(self * shift), shift) else: return Fraction(round(self / shift) * shift) def __hash__(self): return _hash_algorithm(self._numerator, self._denominator) def __eq__(a, b): if type(b) is int: return a._numerator == b and a._denominator == 1 if isinstance(b, numbers.Rational): return (a._numerator == b.numerator and a._denominator == b.denominator) if isinstance(b, numbers.Complex) and b.imag == 0: b = b.real if isinstance(b, float): if math.isnan(b) or math.isinf(b): return 0.0 == b else: return a == a.from_float(b) else: return NotImplemented def _richcmp(self, other, op): if isinstance(other, numbers.Rational): return op(self._numerator * other.denominator, self._denominator * other.numerator) if isinstance(other, float): if math.isnan(other) or math.isinf(other): return op(0.0, other) else: return op(self, self.from_float(other)) else: return NotImplemented def __lt__(a, b): return a._richcmp(b, operator.lt) def __gt__(a, b): return a._richcmp(b, operator.gt) def __le__(a, b): return a._richcmp(b, operator.le) def __ge__(a, b): return a._richcmp(b, operator.ge) def __bool__(a): return bool(a._numerator) def __reduce__(self): return (self.__class__, (self._numerator, self._denominator)) def __copy__(self): if type(self) == Fraction: return self return self.__class__(self._numerator, self._denominator) def __deepcopy__(self, memo): if type(self) == Fraction: return self return self.__class__(self._numerator, self._denominator)
"""An FTP client class and some helper functions.

Based on RFC 959: File Transfer Protocol (FTP), by J. Postel and J. Reynolds

Example:

>>> from ftplib import FTP
>>> ftp = FTP('ftp.python.org')  # connect to host, default port
>>> ftp.login()  # default, i.e.: user anonymous, passwd anonymous@
'230 Guest login ok, access restrictions apply.'
>>> ftp.retrlines('LIST')  # list directory contents
total 9
drwxr-xr-x   8 root     wheel        1024 Jan  3  1994 .
drwxr-xr-x   8 root     wheel        1024 Jan  3  1994 ..
drwxr-xr-x   2 root     wheel        1024 Jan  3  1994 bin
drwxr-xr-x   2 root     wheel        1024 Jan  3  1994 etc
d-wxrwxr-x   2 ftp      wheel        1024 Sep  5 13:43 incoming
drwxr-xr-x   2 root     wheel        1024 Nov 17  1993 lib
drwxr-xr-x   6 1094     wheel        1024 Sep 13 19:07 pub
drwxr-xr-x   3 root     wheel        1024 Jan  3  1994 usr
-rw-r--r--   1 root     root          312 Aug  1  1994 welcome.msg
'226 Transfer complete.'
>>> ftp.quit()
'221 Goodbye.'

A nice test that reveals some of the network dialogue would be:
python ftplib.py -d localhost -l -p -l
"""

# Changes and improvements suggested by Steve Majewski.
# Modified by Jack to work on the mac.
# Modified by Siebren to support docstrings and PASV.
# Modified by Phil Schwartz to add storbinary and storlines callbacks.
# Modified by Giampaolo Rodola' to add TLS support.
    def makeport(self):
        '''Create a new socket and send a PORT command for it.'''
        sock = socket.create_server(("", 0), family=self.af, backlog=1)
        port = sock.getsockname()[1]  # Get proper port
        host = self.sock.getsockname()[0]  # Get proper host
        if self.af == socket.AF_INET:
            resp = self.sendport(host, port)
        else:
            resp = self.sendeprt(host, port)
        if self.timeout is not _GLOBAL_DEFAULT_TIMEOUT:
            sock.settimeout(self.timeout)
        return sock

    def makepasv(self):
        '''Internal: Does the PASV or EPSV handshake -> (address, port)'''
        if self.af == socket.AF_INET:
            untrusted_host, port = parse227(self.sendcmd('PASV'))
            if self.trust_server_pasv_ipv4_address:
                host = untrusted_host
            else:
                host = self.sock.getpeername()[0]
        else:
            host, port = parse229(self.sendcmd('EPSV'), self.sock.getpeername())
        return host, port

    def ntransfercmd(self, cmd, rest=None):
        """Initiate a transfer over the data connection.

        If the transfer is active, send a port command and the
        transfer command, and accept the connection.  If the server is
        passive, send a pasv command, connect to it, and start the
        transfer command.  Either way, return the socket for the
        connection and the expected size of the transfer.  The
        expected size may be None if it could not be determined.

        Optional `rest' argument can be a string that is sent as the
        argument to a REST command.  This is essentially a server
        marker used to tell the server to skip over any data up to the
        given marker.
        """
        size = None
        if self.passiveserver:
            host, port = self.makepasv()
            conn = socket.create_connection((host, port), self.timeout,
                                            source_address=self.source_address)
            try:
                if rest is not None:
                    self.sendcmd("REST %s" % rest)
                resp = self.sendcmd(cmd)
                # Some servers apparently send a 200 reply to
                # a LIST or STOR command, before the 150 reply
                # (and way before the 226 reply). This seems to
                # be in violation of the protocol (which only allows
                # 1xx or error messages for LIST), so we just discard
                # this response.
                if resp[0] == '2':
                    resp = self.getresp()
                if resp[0] != '1':
                    raise error_reply(resp)
            except:
                conn.close()
                raise
        else:
            with self.makeport() as sock:
                if rest is not None:
                    self.sendcmd("REST %s" % rest)
                resp = self.sendcmd(cmd)
                # See above.
                if resp[0] == '2':
                    resp = self.getresp()
                if resp[0] != '1':
                    raise error_reply(resp)
                conn, sockaddr = sock.accept()
                if self.timeout is not _GLOBAL_DEFAULT_TIMEOUT:
                    conn.settimeout(self.timeout)
        if resp[:3] == '150':
            # this is conditional in case we received a 125
            size = parse150(resp)
        return conn, size

    def transfercmd(self, cmd, rest=None):
        """Like ntransfercmd() but returns only the socket."""
        return self.ntransfercmd(cmd, rest)[0]

    def login(self, user='', passwd='', acct=''):
        '''Login, default anonymous.'''
        if not user:
            user = 'anonymous'
        if not passwd:
            passwd = ''
        if not acct:
            acct = ''
        if user == 'anonymous' and passwd in {'', '-'}:
            # If there is no anonymous ftp password specified
            # then we'll just use anonymous@
            # We don't send any other thing because:
            # - We want to remain anonymous
            # - We want to stop SPAM
            # - We don't want to let ftp sites to discriminate by the user,
            #   host or country.
            passwd = passwd + 'anonymous@'
        resp = self.sendcmd('USER ' + user)
        if resp[0] == '3':
            resp = self.sendcmd('PASS ' + passwd)
        if resp[0] == '3':
            resp = self.sendcmd('ACCT ' + acct)
        if resp[0] != '2':
            raise error_reply(resp)
        return resp

    def retrbinary(self, cmd, callback, blocksize=8192, rest=None):
        """Retrieve data in binary mode.  A new port is created for you.

        Args:
          cmd: A RETR command.
          callback: A single parameter callable to be called on each
                    block of data read.
          blocksize: The maximum number of bytes to read from the
                     socket at one time.  [default: 8192]
          rest: Passed to transfercmd().  [default: None]

        Returns:
          The response code.
        """
        self.voidcmd('TYPE I')
        with self.transfercmd(cmd, rest) as conn:
            while data := conn.recv(blocksize):
                callback(data)
            # shutdown ssl layer
            if _SSLSocket is not None and isinstance(conn, _SSLSocket):
                conn.unwrap()
        return self.voidresp()

    def retrlines(self, cmd, callback=None):
        """Retrieve data in line mode.  A new port is created for you.

        Args:
          cmd: A RETR, LIST, or NLST command.
          callback: An optional single parameter callable that is called
                    for each line with the trailing CRLF stripped.
                    [default: print_line()]

        Returns:
          The response code.
        """
        if callback is None:
            callback = print_line
        resp = self.sendcmd('TYPE A')
        with self.transfercmd(cmd) as conn, \
                 conn.makefile('r', encoding=self.encoding) as fp:
            while 1:
                line = fp.readline(self.maxline + 1)
                if len(line) > self.maxline:
                    raise Error("got more than %d bytes" % self.maxline)
                if self.debugging > 2:
                    print('*retr*', repr(line))
                if not line:
                    break
                if line[-2:] == CRLF:
                    line = line[:-2]
                elif line[-1:] == '\n':
                    line = line[:-1]
                callback(line)
            # shutdown ssl layer
            if _SSLSocket is not None and isinstance(conn, _SSLSocket):
                conn.unwrap()
        return self.voidresp()

    def storbinary(self, cmd, fp, blocksize=8192, callback=None, rest=None):
        """Store a file in binary mode.  A new port is created for you.

        Args:
          cmd: A STOR command.
          fp: A file-like object with a read(num_bytes) method.
          blocksize: The maximum data size to read from fp and send over
                     the connection at once.  [default: 8192]
          callback: An optional single parameter callable that is called on
                    each block of data after it is sent.  [default: None]
          rest: Passed to transfercmd().  [default: None]

        Returns:
          The response code.
        """
        self.voidcmd('TYPE I')
        with self.transfercmd(cmd, rest) as conn:
            while buf := fp.read(blocksize):
                conn.sendall(buf)
                if callback:
                    callback(buf)
            # shutdown ssl layer
            if _SSLSocket is not None and isinstance(conn, _SSLSocket):
                conn.unwrap()
        return self.voidresp()

    def storlines(self, cmd, fp, callback=None):
        """Store a file in line mode.  A new port is created for you.

        Args:
          cmd: A STOR command.
          fp: A file-like object with a readline() method.
          callback: An optional single parameter callable that is called on
                    each line after it is sent.  [default: None]

        Returns:
          The response code.
        """
        self.voidcmd('TYPE A')
        with self.transfercmd(cmd) as conn:
            while 1:
                buf = fp.readline(self.maxline + 1)
                if len(buf) > self.maxline:
                    raise Error("got more than %d bytes" % self.maxline)
                if not buf:
                    break
                if buf[-2:] != B_CRLF:
                    if buf[-1] in B_CRLF:
                        buf = buf[:-1]
                    buf = buf + B_CRLF
                conn.sendall(buf)
                if callback:
                    callback(buf)
            # shutdown ssl layer
            if _SSLSocket is not None and isinstance(conn, _SSLSocket):
                conn.unwrap()
        return self.voidresp()

    def acct(self, password):
        '''Send new account name.'''
        cmd = 'ACCT ' + password
        return self.voidcmd(cmd)

    def nlst(self, *args):
        '''Return a list of files in a given directory (default the current).'''
        cmd = 'NLST'
        for arg in args:
            cmd = cmd + (' ' + arg)
        files = []
        self.retrlines(cmd, files.append)
        return files

    def dir(self, *args):
        '''List a directory in long form.
        By default list current directory to stdout.
        Optional last argument is callback function; all
        non-empty arguments before it are concatenated to the
        LIST command.  (This *should* only be used for a pathname.)'''
        cmd = 'LIST'
        func = None
        if args[-1:] and not isinstance(args[-1], str):
            args, func = args[:-1], args[-1]
        for arg in args:
            if arg:
                cmd = cmd + (' ' + arg)
        self.retrlines(cmd, func)

    def mlsd(self, path="", facts=[]):
        '''List a directory in a standardized format by using MLSD
        command (RFC-3659). If path is omitted the current directory
        is assumed. "facts" is a list of strings representing the type
        of information desired (e.g. ["type", "size", "perm"]).

        Return a generator object yielding a tuple of two elements
        for every file found in path.
        First element is the file name, the second one is a dictionary
        including a variable number of "facts" depending on the server
        and whether "facts" argument has been provided.
        '''
        if facts:
            self.sendcmd("OPTS MLST " + ";".join(facts) + ";")
        if path:
            cmd = "MLSD %s" % path
        else:
            cmd = "MLSD"
        lines = []
        self.retrlines(cmd, lines.append)
        for line in lines:
            facts_found, _, name = line.rstrip(CRLF).partition(' ')
            entry = {}
            for fact in facts_found[:-1].split(";"):
                key, _, value = fact.partition("=")
                entry[key.lower()] = value
            yield (name, entry)

    def rename(self, fromname, toname):
        '''Rename a file.'''
        resp = self.sendcmd('RNFR ' + fromname)
        if resp[0] != '3':
            raise error_reply(resp)
        return self.voidcmd('RNTO ' + toname)

    def delete(self, filename):
        '''Delete a file.'''
        resp = self.sendcmd('DELE ' + filename)
        if resp[:3] in {'250', '200'}:
            return resp
        else:
            raise error_reply(resp)

    def cwd(self, dirname):
        '''Change to a directory.'''
        if dirname == '..':
            try:
                return self.voidcmd('CDUP')
            except error_perm as msg:
                if msg.args[0][:3] != '500':
                    raise
        elif dirname == '':
            dirname = '.'  # does nothing, but could return error
        cmd = 'CWD ' + dirname
        return self.voidcmd(cmd)

    def size(self, filename):
        '''Retrieve the size of a file.'''
        # The SIZE command is defined in RFC-3659
        resp = self.sendcmd('SIZE ' + filename)
        if resp[:3] == '213':
            s = resp[3:].strip()
            return int(s)

    def mkd(self, dirname):
        '''Make a directory, return its full pathname.'''
        resp = self.voidcmd('MKD ' + dirname)
        # fix around non-compliant implementations such as IIS shipped
        # with Windows server 2003
        if not resp.startswith('257'):
            return ''
        return parse257(resp)

    def rmd(self, dirname):
        '''Remove a directory.'''
        return self.voidcmd('RMD ' + dirname)

    def pwd(self):
        '''Return current working directory.'''
        resp = self.voidcmd('PWD')
        # fix around non-compliant implementations such as IIS shipped
        # with Windows server 2003
        if not resp.startswith('257'):
            return ''
        return parse257(resp)

    def quit(self):
        '''Quit, and close the connection.'''
        resp = self.voidcmd('QUIT')
        self.close()
        return resp

    def close(self):
        '''Close the connection without assuming anything about it.'''
        try:
            file = self.file
            self.file = None
            if file is not None:
                file.close()
        finally:
            sock = self.sock
            self.sock = None
            if sock is not None:
                sock.close()


try:
    import ssl
except ImportError:
    _SSLSocket = None
else:
    _SSLSocket = ssl.SSLSocket

    class FTP_TLS(FTP):
        '''A FTP subclass which adds TLS support to FTP as described
        in RFC-4217.

        Connect as usual to port 21 implicitly securing the FTP control
        connection before authenticating.

        Securing the data connection requires user to explicitly ask
        for it by calling prot_p() method.

        Usage example:
        >>> from ftplib import FTP_TLS
        >>> ftps = FTP_TLS('ftp.python.org')
        >>> ftps.login()  # login anonymously previously securing control channel
        '230 Guest login ok, access restrictions apply.'
        >>> ftps.prot_p()  # switch to secure data connection
        '200 Protection level set to P'
        >>> ftps.retrlines('LIST')  # list directory content securely
        total 9
        drwxr-xr-x   8 root     wheel        1024 Jan  3  1994 .
        drwxr-xr-x   8 root     wheel        1024 Jan  3  1994 ..
        drwxr-xr-x   2 root     wheel        1024 Jan  3  1994 bin
        drwxr-xr-x   2 root     wheel        1024 Jan  3  1994 etc
        d-wxrwxr-x   2 ftp      wheel        1024 Sep  5 13:43 incoming
        drwxr-xr-x   2 root     wheel        1024 Nov 17  1993 lib
        drwxr-xr-x   6 1094     wheel        1024 Sep 13 19:07 pub
        drwxr-xr-x   3 root     wheel        1024 Jan  3  1994 usr
        -rw-r--r--   1 root     root          312 Aug  1  1994 welcome.msg
        '226 Transfer complete.'
        >>> ftps.quit()
        '221 Goodbye.'
        '''

        def __init__(self, host='', user='', passwd='', acct='',
                     *, context=None, timeout=_GLOBAL_DEFAULT_TIMEOUT,
                     source_address=None, encoding='utf-8'):
            if context is None:
                context = ssl._create_stdlib_context()
            self.context = context
            self._prot_p = False
            super().__init__(host, user, passwd, acct,
                             timeout, source_address, encoding=encoding)

        def login(self, user='', passwd='', acct='', secure=True):
            if secure and not isinstance(self.sock, ssl.SSLSocket):
                self.auth()
            return super().login(user, passwd, acct)

        def auth(self):
            '''Set up secure control connection by using TLS/SSL.'''
            if isinstance(self.sock, ssl.SSLSocket):
                raise ValueError("Already using TLS")
            if self.context.protocol >= ssl.PROTOCOL_TLS:
                resp = self.voidcmd('AUTH TLS')
            else:
                resp = self.voidcmd('AUTH SSL')
            self.sock = self.context.wrap_socket(self.sock,
                                                 server_hostname=self.host)
            self.file = self.sock.makefile(mode='r', encoding=self.encoding)
            return resp

        def ccc(self):
            '''Switch back to a clear-text control connection.'''
            if not isinstance(self.sock, ssl.SSLSocket):
                raise ValueError("not using TLS")
            resp = self.voidcmd('CCC')
            self.sock = self.sock.unwrap()
            return resp

        def prot_p(self):
            '''Set up secure data connection.'''
            # PROT defines whether or not the data channel is to be protected.
            # Though RFC-2228 defines four possible protection levels,
            # RFC-4217 only recommends two, Clear and Private.
            # Clear (PROT C) means that no security is to be used on the
            # data-channel, Private (PROT P) means that the data-channel
            # should be protected by TLS.
            # PBSZ command MUST still be issued, but must have a parameter of
            # '0' to indicate that no buffering is taking place and the data
            # connection should not be encapsulated.
            self.voidcmd('PBSZ 0')
            resp = self.voidcmd('PROT P')
            self._prot_p = True
            return resp

        def prot_c(self):
            '''Set up clear text data connection.'''
            resp = self.voidcmd('PROT C')
            self._prot_p = False
            return resp

        # --- Overridden FTP methods

        def ntransfercmd(self, cmd, rest=None):
            conn, size = super().ntransfercmd(cmd, rest)
            if self._prot_p:
                conn = self.context.wrap_socket(conn,
                                                server_hostname=self.host)
            return conn, size

        def abort(self):
            # overridden as we can't pass MSG_OOB flag to sendall()
            line = b'ABOR' + B_CRLF
            self.sock.sendall(line)
            resp = self.getmultiline()
            if resp[:3] not in {'426', '225', '226'}:
                raise error_proto(resp)
            return resp


_150_re = None

def parse150(resp):
    '''Parse the '150' response for a RETR request.
    Returns the expected transfer size or None; size is not guaranteed to
    be present in the 150 message.
    '''
    if resp[:3] != '150':
        raise error_reply(resp)
    global _150_re
    if _150_re is None:
        import re
        _150_re = re.compile(
            r"150 .* \((\d+) bytes\)", re.IGNORECASE | re.ASCII)
    m = _150_re.match(resp)
    if not m:
        return None
    return int(m.group(1))


_227_re = None

def parse227(resp):
    '''Parse the '227' response for a PASV request.
    Raises error_proto if it does not contain '(h1,h2,h3,h4,p1,p2)'
    Return ('host.addr.as.numbers', port#) tuple.'''
    if resp[:3] != '227':
        raise error_reply(resp)
    global _227_re
    if _227_re is None:
        import re
        _227_re = re.compile(r'(\d+),(\d+),(\d+),(\d+),(\d+),(\d+)', re.ASCII)
    m = _227_re.search(resp)
    if not m:
        raise error_proto(resp)
    numbers = m.groups()
    host = '.'.join(numbers[:4])
    port = (int(numbers[4]) << 8) + int(numbers[5])
    return host, port


def parse229(resp, peer):
    '''Parse the '229' response for an EPSV request.
    Raises error_proto if it does not contain '(|||port|)'
    Return ('host.addr.as.numbers', port#) tuple.'''
    if resp[:3] != '229':
        raise error_reply(resp)
    left = resp.find('(')
    if left < 0:
        raise error_proto(resp)
    right = resp.find(')', left + 1)
    if right < 0:
        raise error_proto(resp)  # should contain '(|||port|)'
    if resp[left + 1] != resp[right - 1]:
        raise error_proto(resp)
    parts = resp[left + 1:right].split(resp[left + 1])
    if len(parts) != 5:
        raise error_proto(resp)
    host = peer[0]
    port = int(parts[3])
    return host, port


def parse257(resp):
    '''Parse the '257' response for a MKD or PWD request.
    This is a response to a MKD or PWD request: a directory name.
    Returns the directoryname in the 257 reply.'''
    if resp[:3] != '257':
        raise error_reply(resp)
    if resp[3:5] != ' "':
        return ''  # Not compliant to RFC 959, but UNIX ftpd does this
    dirname = ''
    i = 5
    n = len(resp)
    while i < n:
        c = resp[i]
        i = i + 1
        if c == '"':
            if i >= n or resp[i] != '"':
                break
            i = i + 1
        dirname = dirname + c
    return dirname


def print_line(line):
    '''Default retrlines callback to print a line.'''
    print(line)


def ftpcp(source, sourcename, target, targetname='', type='A'):
    '''Copy file from one FTP-instance to another.'''
    if not targetname:
        targetname = sourcename
    type = 'TYPE ' + type
    source.voidcmd(type)
    target.voidcmd(type)
    sourcehost, sourceport = parse227(source.sendcmd('PASV'))
    target.sendport(sourcehost, sourceport)
    # RFC 959: the user must "listen" [...] BEFORE sending the
    # transfer request.
    # So: STOR before RETR, because here the target is a "user".
    treply = target.sendcmd('STOR ' + targetname)
    if treply[:3] not in {'125', '150'}:
        raise error_proto  # RFC 959
    sreply = source.sendcmd('RETR ' + sourcename)
    if sreply[:3] not in {'125', '150'}:
        raise error_proto  # RFC 959
    source.voidresp()
    target.voidresp()


def test():
    '''Test program.
    Usage: ftplib [-d] [-r[file]] host [-l[dir]] [-d[dir]] [-p] [file] ...

    -d dir
    -l list
    -p password
    '''
    if len(sys.argv) < 2:
        print(test.__doc__)
        sys.exit(0)

    import netrc

    debugging = 0
    rcfile = None
    while sys.argv[1] == '-d':
        debugging = debugging + 1
        del sys.argv[1]
    if sys.argv[1][:2] == '-r':
        # get name of alternate ~/.netrc file:
        rcfile = sys.argv[1][2:]
        del sys.argv[1]
    host = sys.argv[1]
    ftp = FTP(host)
    ftp.set_debuglevel(debugging)
    userid = passwd = acct = ''
    try:
        netrcobj = netrc.netrc(rcfile)
    except OSError:
        if rcfile is not None:
            sys.stderr.write("Could not open account file"
                             " -- using anonymous login.")
    else:
        try:
            userid, acct, passwd = netrcobj.authenticators(host)
        except KeyError:
            # no account for host
            sys.stderr.write("No account -- using anonymous login.")
    ftp.login(userid, passwd, acct)
    for file in sys.argv[2:]:
        if file[:2] == '-l':
            ftp.dir(file[2:])
        elif file[:2] == '-d':
            cmd = 'CWD'
            if file[2:]:
                cmd = cmd + ' ' + file[2:]
            resp = ftp.sendcmd(cmd)
        elif file == '-p':
            ftp.set_pasv(not ftp.passiveserver)
        else:
            ftp.retrbinary('RETR ' + file,
                           sys.stdout.write, 1024)
    ftp.quit()


if __name__ == '__main__':
    test()
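The 227 (PASV) reply parsing can be exercised offline without a server. A minimal standalone sketch of the same idea (the function name `parse_pasv` and regex name are illustrative, not part of the module): the reply carries four dotted-quad octets plus two port bytes, and the port is `p1*256 + p2`.

```python
import re

# Pattern for the host/port tuple in a '227 Entering Passive Mode' reply.
_pasv_re = re.compile(r'(\d+),(\d+),(\d+),(\d+),(\d+),(\d+)', re.ASCII)

def parse_pasv(resp):
    """Return (host, port) from a 227 reply; raise on anything else."""
    if resp[:3] != '227':
        raise ValueError(resp)
    m = _pasv_re.search(resp)
    if not m:
        raise ValueError(resp)
    numbers = m.groups()
    host = '.'.join(numbers[:4])
    port = (int(numbers[4]) << 8) + int(numbers[5])
    return host, port

print(parse_pasv('227 Entering Passive Mode (192,168,1,2,19,137)'))
# -> ('192.168.1.2', 5001), since 19*256 + 137 == 5001
```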
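Reply handling in the client follows RFC 959's first-digit convention: a first digit of 1-3 is success or continuation, 4 is a transient error, 5 is a permanent error, and anything else is a protocol violation. A small standalone sketch of that classification (the function name is illustrative):

```python
def classify_reply(resp):
    """Classify an FTP server reply by its first digit (RFC 959)."""
    c = resp[:1]
    if c in {'1', '2', '3'}:
        return 'ok'            # positive preliminary/completion/intermediate
    if c == '4':
        return 'temporary error'   # transient negative completion
    if c == '5':
        return 'permanent error'   # permanent negative completion
    return 'protocol error'        # malformed reply

print(classify_reply('226 Transfer complete.'))  # -> ok
```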
import sys
import socket
from socket import _GLOBAL_DEFAULT_TIMEOUT

__all__ = ["FTP", "error_reply", "error_temp", "error_perm", "error_proto",
           "all_errors"]

MSG_OOB = 0x1
FTP_PORT = 21
MAXLINE = 8192


class Error(Exception): pass
class error_reply(Error): pass
class error_temp(Error): pass
class error_perm(Error): pass
class error_proto(Error): pass

all_errors = (Error, OSError, EOFError)

CRLF = '\r\n'
B_CRLF = b'\r\n'


class FTP:
    debugging = 0
    host = ''
    port = FTP_PORT
    maxline = MAXLINE
    sock = None
    file = None
    welcome = None
    passiveserver = True
    trust_server_pasv_ipv4_address = False

    def __init__(self, host='', user='', passwd='', acct='',
                 timeout=_GLOBAL_DEFAULT_TIMEOUT, source_address=None, *,
                 encoding='utf-8'):
        self.encoding = encoding
        self.source_address = source_address
        self.timeout = timeout
        if host:
            self.connect(host)
            if user:
                self.login(user, passwd, acct)

    def __enter__(self):
        return self

    def __exit__(self, *args):
        if self.sock is not None:
            try:
                self.quit()
            except (OSError, EOFError):
                pass
            finally:
                if self.sock is not None:
                    self.close()

    def connect(self, host='', port=0, timeout=-999, source_address=None):
        if host != '':
            self.host = host
        if port > 0:
            self.port = port
        if timeout != -999:
            self.timeout = timeout
        if self.timeout is not None and not self.timeout:
            raise ValueError('Non-blocking socket (timeout=0) is not supported')
        if source_address is not None:
            self.source_address = source_address
        sys.audit("ftplib.connect", self, self.host, self.port)
        self.sock = socket.create_connection((self.host, self.port),
                                             self.timeout,
                                             source_address=self.source_address)
        self.af = self.sock.family
        self.file = self.sock.makefile('r', encoding=self.encoding)
        self.welcome = self.getresp()
        return self.welcome

    def getwelcome(self):
        if self.debugging:
            print('*welcome*', self.sanitize(self.welcome))
        return self.welcome

    def set_debuglevel(self, level):
        self.debugging = level
    debug = set_debuglevel

    def set_pasv(self, val):
        self.passiveserver = val

    def sanitize(self, s):
        if s[:5] in {'pass ', 'PASS '}:
            i = len(s.rstrip('\r\n'))
            s = s[:5] + '*'*(i-5) + s[i:]
        return repr(s)

    def putline(self, line):
        if '\r' in line or '\n' in line:
            raise ValueError('an illegal newline character should not be contained')
        sys.audit("ftplib.sendcmd", self, line)
        line = line + CRLF
        if self.debugging > 1:
            print('*put*', self.sanitize(line))
        self.sock.sendall(line.encode(self.encoding))

    def putcmd(self, line):
        if self.debugging:
            print('*cmd*', self.sanitize(line))
        self.putline(line)

    def getline(self):
        line = self.file.readline(self.maxline + 1)
        if len(line) > self.maxline:
            raise Error("got more than %d bytes" % self.maxline)
        if self.debugging > 1:
            print('*get*', self.sanitize(line))
        if not line:
            raise EOFError
        if line[-2:] == CRLF:
            line = line[:-2]
        elif line[-1:] in CRLF:
            line = line[:-1]
        return line

    def getmultiline(self):
        line = self.getline()
        if line[3:4] == '-':
            code = line[:3]
            while 1:
                nextline = self.getline()
                line = line + ('\n' + nextline)
                if nextline[:3] == code and \
                        nextline[3:4] != '-':
                    break
        return line

    def getresp(self):
        resp = self.getmultiline()
        if self.debugging:
            print('*resp*', self.sanitize(resp))
        self.lastresp = resp[:3]
        c = resp[:1]
        if c in {'1', '2', '3'}:
            return resp
        if c == '4':
            raise error_temp(resp)
        if c == '5':
            raise error_perm(resp)
        raise error_proto(resp)

    def voidresp(self):
        resp = self.getresp()
        if resp[:1] != '2':
            raise error_reply(resp)
        return resp

    def abort(self):
        line = b'ABOR' + B_CRLF
        if self.debugging > 1:
            print('*put urgent*', self.sanitize(line))
        self.sock.sendall(line, MSG_OOB)
        resp = self.getmultiline()
        if resp[:3] not in {'426', '225', '226'}:
            raise error_proto(resp)
        return resp

    def sendcmd(self, cmd):
        self.putcmd(cmd)
        return self.getresp()

    def voidcmd(self, cmd):
        self.putcmd(cmd)
        return self.voidresp()

    def sendport(self, host, port):
        hbytes = host.split('.')
        pbytes = [repr(port//256), repr(port%256)]
        bytes = hbytes + pbytes
        cmd = 'PORT ' + ','.join(bytes)
        return self.voidcmd(cmd)

    def sendeprt(self, host, port):
        af = 0
        if self.af == socket.AF_INET:
            af = 1
        if self.af == socket.AF_INET6:
            af = 2
        if af == 0:
            raise error_proto('unsupported address family')
        fields = ['', repr(af), host, repr(port), '']
        cmd = 'EPRT ' + '|'.join(fields)
        return self.voidcmd(cmd)

    def makeport(self):
        sock = socket.create_server(("", 0), family=self.af, backlog=1)
        port = sock.getsockname()[1]
        host = self.sock.getsockname()[0]
        if self.af == socket.AF_INET:
            resp = self.sendport(host, port)
        else:
            resp = self.sendeprt(host, port)
        if self.timeout is not _GLOBAL_DEFAULT_TIMEOUT:
            sock.settimeout(self.timeout)
        return sock

    def makepasv(self):
        if self.af == socket.AF_INET:
            untrusted_host, port = parse227(self.sendcmd('PASV'))
            if self.trust_server_pasv_ipv4_address:
                host = untrusted_host
            else:
                host = self.sock.getpeername()[0]
        else:
            host, port = parse229(self.sendcmd('EPSV'),
                                  self.sock.getpeername())
        return host, port

    def ntransfercmd(self, cmd, rest=None):
        size = None
        if self.passiveserver:
            host, port = self.makepasv()
            conn = socket.create_connection((host, port), self.timeout,
                                            source_address=self.source_address)
            try:
                if rest is not None:
                    self.sendcmd("REST %s" % rest)
                resp = self.sendcmd(cmd)
                if resp[0] == '2':
                    resp = self.getresp()
                if resp[0] != '1':
                    raise error_reply(resp)
            except:
                conn.close()
                raise
        else:
            with self.makeport() as sock:
                if rest is not None:
                    self.sendcmd("REST %s" % rest)
                resp = self.sendcmd(cmd)
                if resp[0] == '2':
                    resp = self.getresp()
                if resp[0] != '1':
                    raise error_reply(resp)
                conn, sockaddr = sock.accept()
                if self.timeout is not _GLOBAL_DEFAULT_TIMEOUT:
                    conn.settimeout(self.timeout)
        if resp[:3] == '150':
            size = parse150(resp)
        return conn, size

    def transfercmd(self, cmd, rest=None):
        return self.ntransfercmd(cmd, rest)[0]

    def login(self, user='', passwd='', acct=''):
        if not user:
            user = 'anonymous'
        if not passwd:
            passwd = ''
        if not acct:
            acct = ''
        if user == 'anonymous' and passwd in {'', '-'}:
            passwd = passwd + 'anonymous@'
        resp = self.sendcmd('USER ' + user)
        if resp[0] == '3':
            resp = self.sendcmd('PASS ' + passwd)
        if resp[0] == '3':
            resp = self.sendcmd('ACCT ' + acct)
        if resp[0] != '2':
            raise error_reply(resp)
        return resp

    def retrbinary(self, cmd, callback, blocksize=8192, rest=None):
        self.voidcmd('TYPE I')
        with self.transfercmd(cmd, rest) as conn:
            while data := conn.recv(blocksize):
                callback(data)
            if _SSLSocket is not None and isinstance(conn, _SSLSocket):
                conn.unwrap()
        return self.voidresp()

    def retrlines(self, cmd, callback=None):
        if callback is None:
            callback = print_line
        resp = self.sendcmd('TYPE A')
        with self.transfercmd(cmd) as conn, \
                conn.makefile('r', encoding=self.encoding) as fp:
            while 1:
                line = fp.readline(self.maxline + 1)
                if len(line) > self.maxline:
                    raise Error("got more than %d bytes" % self.maxline)
                if self.debugging > 2:
                    print('*retr*', repr(line))
                if not line:
                    break
                if line[-2:] == CRLF:
                    line = line[:-2]
                elif line[-1:] == '\n':
                    line = line[:-1]
                callback(line)
            if _SSLSocket is not None and isinstance(conn, _SSLSocket):
                conn.unwrap()
        return self.voidresp()

    def storbinary(self, cmd, fp, blocksize=8192, callback=None, rest=None):
        self.voidcmd('TYPE I')
        with self.transfercmd(cmd, rest) as conn:
            while buf := fp.read(blocksize):
                conn.sendall(buf)
                if callback:
                    callback(buf)
            if _SSLSocket is not None and isinstance(conn, _SSLSocket):
                conn.unwrap()
        return self.voidresp()

    def storlines(self, cmd, fp, callback=None):
        self.voidcmd('TYPE A')
        with self.transfercmd(cmd) as conn:
            while 1:
                buf = fp.readline(self.maxline + 1)
                if len(buf) > self.maxline:
                    raise Error("got more than %d bytes" % self.maxline)
                if not buf:
                    break
                if buf[-2:] != B_CRLF:
                    if buf[-1] in B_CRLF:
                        buf = buf[:-1]
                    buf = buf + B_CRLF
                conn.sendall(buf)
                if callback:
                    callback(buf)
            if _SSLSocket is not None and isinstance(conn, _SSLSocket):
                conn.unwrap()
        return self.voidresp()

    def acct(self, password):
        cmd = 'ACCT ' + password
        return self.voidcmd(cmd)

    def nlst(self, *args):
        cmd = 'NLST'
        for arg in args:
            cmd = cmd + (' ' + arg)
        files = []
        self.retrlines(cmd, files.append)
        return files

    def dir(self, *args):
        cmd = 'LIST'
        func = None
        if args[-1:] and not isinstance(args[-1], str):
            args, func = args[:-1], args[-1]
        for arg in args:
            if arg:
                cmd = cmd + (' ' + arg)
        self.retrlines(cmd, func)

    def mlsd(self, path="", facts=[]):
        if facts:
            self.sendcmd("OPTS MLST " + ";".join(facts) + ";")
        if path:
            cmd = "MLSD %s" % path
        else:
            cmd = "MLSD"
        lines = []
        self.retrlines(cmd, lines.append)
        for line in lines:
            facts_found, _, name = line.rstrip(CRLF).partition(' ')
            entry = {}
            for fact in facts_found[:-1].split(";"):
                key, _, value = fact.partition("=")
                entry[key.lower()] = value
            yield (name, entry)

    def rename(self, fromname, toname):
        resp = self.sendcmd('RNFR ' + fromname)
        if resp[0] != '3':
            raise error_reply(resp)
        return self.voidcmd('RNTO ' + toname)

    def delete(self, filename):
        resp = self.sendcmd('DELE ' + filename)
        if resp[:3] in {'250', '200'}:
            return resp
        else:
            raise error_reply(resp)

    def cwd(self, dirname):
        if dirname == '..':
            try:
                return self.voidcmd('CDUP')
            except error_perm as msg:
                if msg.args[0][:3] != '500':
                    raise
        elif dirname == '':
            dirname = '.'
        cmd = 'CWD ' + dirname
        return self.voidcmd(cmd)

    def size(self, filename):
        resp = self.sendcmd('SIZE ' + filename)
        if resp[:3] == '213':
            s = resp[3:].strip()
            return int(s)

    def mkd(self, dirname):
        resp = self.voidcmd('MKD ' + dirname)
        if not resp.startswith('257'):
            return ''
        return parse257(resp)

    def rmd(self, dirname):
        return self.voidcmd('RMD ' + dirname)

    def pwd(self):
        resp = self.voidcmd('PWD')
        if not resp.startswith('257'):
            return ''
        return parse257(resp)

    def quit(self):
        resp = self.voidcmd('QUIT')
        self.close()
        return resp

    def close(self):
        try:
            file = self.file
            self.file = None
            if file is not None:
                file.close()
        finally:
            sock = self.sock
            self.sock = None
            if sock is not None:
                sock.close()


try:
    import ssl
except ImportError:
    _SSLSocket = None
else:
    _SSLSocket = ssl.SSLSocket

    class FTP_TLS(FTP):
        def __init__(self, host='', user='', passwd='', acct='',
                     *, context=None, timeout=_GLOBAL_DEFAULT_TIMEOUT,
                     source_address=None, encoding='utf-8'):
            if context is None:
                context = ssl._create_stdlib_context()
            self.context = context
            self._prot_p = False
            super().__init__(host, user, passwd, acct,
                             timeout, source_address, encoding=encoding)

        def login(self, user='', passwd='', acct='', secure=True):
            if secure and not isinstance(self.sock, ssl.SSLSocket):
                self.auth()
            return super().login(user, passwd, acct)

        def auth(self):
            if isinstance(self.sock, ssl.SSLSocket):
                raise ValueError("Already using TLS")
            if self.context.protocol >= ssl.PROTOCOL_TLS:
                resp = self.voidcmd('AUTH TLS')
            else:
                resp = self.voidcmd('AUTH SSL')
            self.sock = self.context.wrap_socket(self.sock,
                                                 server_hostname=self.host)
            self.file = self.sock.makefile(mode='r', encoding=self.encoding)
            return resp

        def ccc(self):
            if not isinstance(self.sock, ssl.SSLSocket):
                raise ValueError("not using TLS")
            resp = self.voidcmd('CCC')
            self.sock = self.sock.unwrap()
            return resp

        def prot_p(self):
            self.voidcmd('PBSZ 0')
            resp = self.voidcmd('PROT P')
            self._prot_p = True
            return resp

        def prot_c(self):
            resp = self.voidcmd('PROT C')
            self._prot_p = False
            return resp

        def ntransfercmd(self, cmd, rest=None):
            conn, size = super().ntransfercmd(cmd, rest)
            if self._prot_p:
                conn = self.context.wrap_socket(conn,
                                                server_hostname=self.host)
            return conn, size

        def abort(self):
            line = b'ABOR' + B_CRLF
            self.sock.sendall(line)
            resp = self.getmultiline()
            if resp[:3] not in {'426', '225', '226'}:
                raise error_proto(resp)
            return resp

    __all__.append('FTP_TLS')
    all_errors = (Error, OSError, EOFError, ssl.SSLError)


_150_re = None

def parse150(resp):
    if resp[:3] != '150':
        raise error_reply(resp)
    global _150_re
    if _150_re is None:
        import re
        _150_re = re.compile(
            r"150 .* \((\d+) bytes\)", re.IGNORECASE | re.ASCII)
    m = _150_re.match(resp)
    if not m:
        return None
    return int(m.group(1))


_227_re = None

def parse227(resp):
    if resp[:3] != '227':
        raise error_reply(resp)
    global _227_re
    if _227_re is None:
        import re
        _227_re = re.compile(r'(\d+),(\d+),(\d+),(\d+),(\d+),(\d+)', re.ASCII)
    m = _227_re.search(resp)
    if not m:
        raise error_proto(resp)
    numbers = m.groups()
    host = '.'.join(numbers[:4])
    port = (int(numbers[4]) << 8) + int(numbers[5])
    return host, port


def parse229(resp, peer):
    if resp[:3] != '229':
        raise error_reply(resp)
    left = resp.find('(')
    if left < 0:
        raise error_proto(resp)
    right = resp.find(')', left + 1)
    if right < 0:
        raise error_proto(resp)
    if resp[left + 1] != resp[right - 1]:
        raise error_proto(resp)
    parts = resp[left + 1:right].split(resp[left+1])
    if len(parts) != 5:
        raise error_proto(resp)
    host = peer[0]
    port = int(parts[3])
    return host, port


def parse257(resp):
    if resp[:3] != '257':
        raise error_reply(resp)
    if resp[3:5] != ' "':
        return ''
    dirname = ''
    i = 5
    n = len(resp)
    while i < n:
        c = resp[i]
        i = i+1
        if c == '"':
            if i >= n or resp[i] != '"':
                break
            i = i+1
        dirname = dirname + c
    return dirname


def print_line(line):
    print(line)


def ftpcp(source, sourcename, target, targetname='', type='I'):
    if not targetname:
        targetname = sourcename
    type = 'TYPE ' + type
    source.voidcmd(type)
    target.voidcmd(type)
    sourcehost, sourceport = parse227(source.sendcmd('PASV'))
    target.sendport(sourcehost, sourceport)
    treply = target.sendcmd('STOR ' + targetname)
    if treply[:3] not in {'125', '150'}:
        raise error_proto
    sreply = source.sendcmd('RETR ' + sourcename)
    if sreply[:3] not in {'125', '150'}:
        raise error_proto
    source.voidresp()
    target.voidresp()


def test():
    '''Test program.
    Usage: ftp [-d] [-r[file]] host [-l[dir]] [-d[dir]] [-p] [file] ...

    -d dir
    -l list
    -p password
    '''
    if len(sys.argv) < 2:
        print(test.__doc__)
        sys.exit(0)

    import netrc

    debugging = 0
    rcfile = None
    while sys.argv[1] == '-d':
        debugging = debugging+1
        del sys.argv[1]
    if sys.argv[1][:2] == '-r':
        # get name of alternate ~/.netrc file
        rcfile = sys.argv[1][2:]
        del sys.argv[1]
    host = sys.argv[1]
    ftp = FTP(host)
    ftp.set_debuglevel(debugging)
    userid = passwd = acct = ''
    try:
        netrcobj = netrc.netrc(rcfile)
    except OSError:
        if rcfile is not None:
            sys.stderr.write("Could not open account file"
                             " -- using anonymous login.")
    else:
        try:
            userid, acct, passwd = netrcobj.authenticators(host)
        except KeyError:
            # no account for host
            sys.stderr.write("No account -- using anonymous login.")
    ftp.login(userid, passwd, acct)
    for file in sys.argv[2:]:
        if file[:2] == '-l':
            ftp.dir(file[2:])
        elif file[:2] == '-d':
            cmd = 'CWD'
            if file[2:]:
                cmd = cmd + ' ' + file[2:]
            resp = ftp.sendcmd(cmd)
        elif file == '-p':
            ftp.set_pasv(not ftp.passiveserver)
        else:
            ftp.retrbinary('RETR ' + file, sys.stdout.write, 1024)
    ftp.quit()


if __name__ == '__main__':
    test()
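The 257-reply unquoting rule implemented by parse257() above (a literal '"' inside the directory name is doubled) can be exercised with a minimal standalone sketch of the same loop; the helper name is illustrative:

```python
def unquote_257(resp):
    # A 257 reply looks like: 257 "<dirname>" <commentary>, with any
    # '"' inside <dirname> doubled. Walk the quoted span, collapsing
    # doubled quotes, and stop at the closing quote.
    if resp[3:5] != ' "':
        return ''
    dirname, i, n = '', 5, len(resp)
    while i < n:
        c = resp[i]
        i += 1
        if c == '"':
            if i >= n or resp[i] != '"':
                break            # closing quote
            i += 1               # doubled quote -> one literal "
        dirname += c
    return dirname

print(unquote_257('257 "/tmp/my ""quoted"" dir" created.'))
# -> /tmp/my "quoted" dir
```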
functools.py - Tools for working with functions and callable objects.

Python module wrapper for the _functools C module, to allow utilities written in Python to be added to the functools module. Written by Nick Coghlan <ncoghlan at gmail.com>, Raymond Hettinger <python at rcn.com>, and Łukasz Langa <lukasz at langa.pl>. Copyright (C) 2006-2013 Python Software Foundation. See the C source code for _functools credits. (import types, weakref is deferred to single_dispatch; avoiding the types import speeds up import time.)

update_wrapper() and the wraps() decorator are tools to help write wrapper functions that can handle naive introspection.

update_wrapper(wrapper, wrapped, assigned, updated): update a wrapper function to look like the wrapped function. wrapper is the function to be updated, wrapped is the original function; assigned is a tuple naming the attributes assigned directly from the wrapped function to the wrapper function (defaults to functools.WRAPPER_ASSIGNMENTS); updated is a tuple naming the attributes of the wrapper that are updated with the corresponding attribute from the wrapped function (defaults to functools.WRAPPER_UPDATES). Per issue #17482, __wrapped__ is set last so we don't inadvertently copy it from the wrapped function when updating __dict__. The wrapper is returned, so this can be used as a decorator via partial().

wraps(wrapped, assigned, updated): decorator factory to apply update_wrapper() to a wrapper function. Returns a decorator that invokes update_wrapper() with the decorated function as the wrapper argument and the arguments to wraps() as the remaining arguments. Default arguments are as for update_wrapper(). This is a convenience function to simplify applying partial() to update_wrapper().

total_ordering class decorator: the total ordering functions all invoke the root magic method directly, rather than using the corresponding operator. This avoids possible infinite recursion that could occur when the operator dispatch logic detects a NotImplemented result and then calls a reflected method. total_ordering(cls) fills in missing ordering methods, first finding user-defined comparisons (not those inherited from object):

    roots = {op for op in _convert
             if getattr(cls, op, None) is not getattr(object, op, None)}
    if not roots:
        raise ValueError('must define at least one ordering operation')
    root = max(roots)       # prefer __lt__ to __le__ to __gt__ to __ge__
    for opname, opfunc in _convert[root]:
        if opname not in roots:
            opfunc.__name__ = opname
            setattr(cls, opname, opfunc)
    return cls

cmp_to_key function converter: cmp_to_key(mycmp) converts a cmp= function into a key= function.

reduce(function, iterable[, initial]): reduce a sequence to a single item. Apply a function of two arguments cumulatively to the items of a sequence or iterable, from left to right, so as to reduce the iterable to a single value. For example, reduce(lambda x, y: x+y, [1, 2, 3, 4, 5]) calculates ((((1+2)+3)+4)+5). If initial is present, it is placed before the items of the iterable in the calculation, and serves as a default when the iterable is empty.

partial: argument application. Purely functional, no descriptor behaviour. partial(func, *args, **keywords) returns a new function with partial application of the given arguments and keywords.

partialmethod (descriptor version): method descriptor with partial application of the given arguments and keywords. Supports wrapping existing descriptors and handles non-descriptor callables as instance methods. func could be a descriptor, like classmethod, which isn't callable, so partialmethod can't inherit from partial (it verifies that func is callable). Flattening is mandatory in order to place cls/self before all other arguments; it's also more efficient, since only one function will be called. __get__ returning something new indicates the creation of an appropriate callable; if the underlying descriptor didn't do anything, this is treated like an instance method.

LRU cache function decorator. _HashedSeq guarantees that hash() will be called no more than once per element; this is important because lru_cache() will hash the key multiple times on a cache miss. _make_key makes a cache key from optionally typed positional and keyword arguments; the key is constructed in a way that is as flat as possible, rather than as a nested structure that would take more memory. If there is only a single argument and its data type is known to cache its hash value, then that argument is returned without a wrapper; this saves space and improves lookup speed. All of the code relies on kwds preserving the order input by the user. Formerly, the kwds were sorted before looping; the new way is much faster, however it means that f(x=1, y=2) will now be treated as a distinct call from f(y=2, x=1), which will be cached separately.

lru_cache(maxsize=128, typed=False): least-recently-used cache decorator. If maxsize is set to None, the LRU features are disabled and the cache can grow without bound. If typed is True, arguments of different types will be cached separately; for example, f(Decimal("3.0")) and f(3.0) will be treated as distinct calls with distinct results. Some types, such as str and int, may be cached separately even when typed is False. Arguments to the cached function must be hashable. View the cache statistics named tuple (hits, misses, maxsize, currsize) with f.cache_info(); clear the cache and statistics with f.cache_clear(); access the underlying function with f.__wrapped__. See https://en.wikipedia.org/wiki/Cache_replacement_policies#Least_recently_used_(LRU)

Users should only access the lru_cache through its public API: cache_info(), cache_clear() and f.__wrapped__. The internals of the lru_cache are encapsulated for thread safety and to allow the implementation to change (including a possible C version). Negative maxsize is treated as 0; a callable maxsize means the user_function was passed in directly via the maxsize argument. Implementation notes (constants shared by all LRU cache instances: a unique sentinel object used to signal cache misses; PREV/NEXT/KEY/RESULT names for the link fields; a bound method to look up a key or return None; cache_len, which gets the cache size without calling len() because linked-list updates aren't threadsafe; and the root of the circular doubly linked list, initialized by pointing to itself):

- maxsize == 0: no caching, just a statistics update.
- maxsize is None: simple caching, without ordering or size limit.
- otherwise: size-limited caching that tracks accesses by recency. On a hit, the link is moved to the front of the circular queue. On a miss, reaching the update under the lock may mean that this same key was added to the cache while the lock was released; since the link update is already done, we need only return the computed result and update the count of misses. When the cache is full, the old root is used to store the new key and result, and the oldest link is emptied and made the new root. A reference to the old key and old result is kept to prevent their ref counts from going to zero during the update; that prevents potentially arbitrary object clean-up code (i.e. __del__) from running while we're still adjusting the links. The cache dictionary is then updated, saving the potentially reentrant cache[key] assignment for last, after the root and links have been put in a consistent state. Otherwise, the result is put in a new link at the front of the queue, using the cache_len bound method instead of the len() function, which could potentially be wrapped in an lru_cache itself. cache_info() reports cache statistics (with the lock held, it returns CacheInfo(hits, misses, maxsize, cache_len())); cache_clear() clears the cache and cache statistics.

cache: simplified access to the "infinity cache" (an lru_cache with maxsize=None).

singledispatch: single-dispatch generic function decorator.

_c3_merge(sequences): merges MROs in sequences to a single MRO using the C3 algorithm, adapted from https://www.python.org/download/releases/2.3/mro/ (purge empty sequences, find merge candidates among the sequence heads, reject the current head if it appears later, then remove the chosen candidate).

_c3_mro(cls, abcs): computes the method resolution order using extended C3 linearization. If no abcs are given, the algorithm works exactly like the built-in C3 linearization used for method resolution. If given, abcs is a list of abstract base classes that should be inserted into the resulting MRO. Unrelated ABCs are ignored and don't end up in the result. The algorithm inserts ABCs where their functionality is introduced, i.e. where issubclass(cls, abc) returns True for the class itself but returns False for all its direct base classes. Implicit ABCs for a given class (either registered or inferred from the presence of a special method like __len__) are inserted directly after the last ABC explicitly listed in the MRO of said class. If two implicit ABCs end up next to each other in the resulting MRO, their ordering depends on the order of types in abcs. Bases up to the last explicit ABC are considered first; if cls is the class that introduces behaviour described by an ABC base, said ABC is inserted into its MRO.

_compose_mro(cls, types): calculates the method resolution order for a given class cls, including relevant abstract base classes (with their respective bases) from the types iterable. Uses a modified C3 linearization algorithm. Entries which are already present in the __mro__ or unrelated are removed, as are entries which are strict bases of other entries (they will end up in the MRO anyway). Subclasses of the ABCs in types which are also implemented by cls can be used to stabilize ABC ordering; subclasses with the biggest number of useful bases are favored.

_find_impl(cls, registry): returns the best matching implementation from registry for type cls. Where there is no registered implementation for a specific type, its method resolution order is used to find a more generic implementation. Note: if registry does not contain an implementation for the base object type, this function may return None. If a match is an implicit ABC but there is another unrelated, equally matching implicit ABC, refuse the temptation to guess.

singledispatch(func): single-dispatch generic function decorator. Transforms a function into a generic function, which can have different behaviours depending upon the type of its first argument. The decorated function acts as the default implementation, and additional implementations can be registered using the register() attribute of the generic function. (There are many programs that use functools without singledispatch, so we trade off making singledispatch marginally slower for the benefit of making start-up of such applications slightly faster.) generic_func.dispatch(cls) -> <function implementation> runs the dispatch algorithm to return the best available implementation for the given cls registered on generic_func. generic_func.register(cls, func) -> func registers a new implementation for the given cls on generic_func. typing is only imported if annotation parsing is necessary.

singledispatchmethod (descriptor version): single-dispatch generic method descriptor. Supports wrapping existing descriptors and handles non-descriptor callables as instance methods (see the comment in singledispatch about import cost). generic_method.register(cls, func) -> func registers a new implementation for the given cls on generic_method.

cached_property: property result cached as instance attribute.
attribute not all objects have __dict__ e g class defines slots
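The caching behaviour described above is easiest to see with a small example; the function `fib` below is illustrative only, not part of this module:

```python
from functools import lru_cache

@lru_cache(maxsize=None)    # LRU features disabled: unbounded cache
def fib(n):
    # Naive double recursion becomes linear-time once results are memoized.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(30))              # 832040
info = fib.cache_info()     # named tuple: hits, misses, maxsize, currsize
print(info.misses)          # 31 -- one miss per distinct argument 0..30
fib.cache_clear()           # resets both the cache and the statistics
```

Because maxsize is None here, the wrapper is the simple dictionary-backed variant with no linked-list bookkeeping at all.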
__all__ = ['update_wrapper', 'wraps', 'WRAPPER_ASSIGNMENTS', 'WRAPPER_UPDATES', 'total_ordering', 'cache', 'cmp_to_key', 'lru_cache', 'reduce', 'partial', 'partialmethod', 'singledispatch', 'singledispatchmethod', 'cached_property'] from abc import get_cache_token from collections import namedtuple from reprlib import recursive_repr from _thread import RLock GenericAlias = type(list[int]) WRAPPER_ASSIGNMENTS = ('__module__', '__name__', '__qualname__', '__doc__', '__annotations__', '__type_params__') WRAPPER_UPDATES = ('__dict__',) def update_wrapper(wrapper, wrapped, assigned = WRAPPER_ASSIGNMENTS, updated = WRAPPER_UPDATES): for attr in assigned: try: value = getattr(wrapped, attr) except AttributeError: pass else: setattr(wrapper, attr, value) for attr in updated: getattr(wrapper, attr).update(getattr(wrapped, attr, {})) wrapper.__wrapped__ = wrapped return wrapper def wraps(wrapped, assigned = WRAPPER_ASSIGNMENTS, updated = WRAPPER_UPDATES): return partial(update_wrapper, wrapped=wrapped, assigned=assigned, updated=updated) def _gt_from_lt(self, other): 'Return a > b. Computed by @total_ordering from (not a < b) and (a != b).' op_result = type(self).__lt__(self, other) if op_result is NotImplemented: return op_result return not op_result and self != other def _le_from_lt(self, other): 'Return a <= b. Computed by @total_ordering from (a < b) or (a == b).' op_result = type(self).__lt__(self, other) if op_result is NotImplemented: return op_result return op_result or self == other def _ge_from_lt(self, other): 'Return a >= b. Computed by @total_ordering from (not a < b).' op_result = type(self).__lt__(self, other) if op_result is NotImplemented: return op_result return not op_result def _ge_from_le(self, other): 'Return a >= b. Computed by @total_ordering from (not a <= b) or (a == b).' op_result = type(self).__le__(self, other) if op_result is NotImplemented: return op_result return not op_result or self == other def _lt_from_le(self, other): 'Return a < b. 
Computed by @total_ordering from (a <= b) and (a != b).' op_result = type(self).__le__(self, other) if op_result is NotImplemented: return op_result return op_result and self != other def _gt_from_le(self, other): 'Return a > b. Computed by @total_ordering from (not a <= b).' op_result = type(self).__le__(self, other) if op_result is NotImplemented: return op_result return not op_result def _lt_from_gt(self, other): 'Return a < b. Computed by @total_ordering from (not a > b) and (a != b).' op_result = type(self).__gt__(self, other) if op_result is NotImplemented: return op_result return not op_result and self != other def _ge_from_gt(self, other): 'Return a >= b. Computed by @total_ordering from (a > b) or (a == b).' op_result = type(self).__gt__(self, other) if op_result is NotImplemented: return op_result return op_result or self == other def _le_from_gt(self, other): 'Return a <= b. Computed by @total_ordering from (not a > b).' op_result = type(self).__gt__(self, other) if op_result is NotImplemented: return op_result return not op_result def _le_from_ge(self, other): 'Return a <= b. Computed by @total_ordering from (not a >= b) or (a == b).' op_result = type(self).__ge__(self, other) if op_result is NotImplemented: return op_result return not op_result or self == other def _gt_from_ge(self, other): 'Return a > b. Computed by @total_ordering from (a >= b) and (a != b).' op_result = type(self).__ge__(self, other) if op_result is NotImplemented: return op_result return op_result and self != other def _lt_from_ge(self, other): 'Return a < b. Computed by @total_ordering from (not a >= b).' 
op_result = type(self).__ge__(self, other) if op_result is NotImplemented: return op_result return not op_result _convert = { '__lt__': [('__gt__', _gt_from_lt), ('__le__', _le_from_lt), ('__ge__', _ge_from_lt)], '__le__': [('__ge__', _ge_from_le), ('__lt__', _lt_from_le), ('__gt__', _gt_from_le)], '__gt__': [('__lt__', _lt_from_gt), ('__ge__', _ge_from_gt), ('__le__', _le_from_gt)], '__ge__': [('__le__', _le_from_ge), ('__gt__', _gt_from_ge), ('__lt__', _lt_from_ge)] } def total_ordering(cls): roots = {op for op in _convert if getattr(cls, op, None) is not getattr(object, op, None)} if not roots: raise ValueError('must define at least one ordering operation: < > <= >=') root = max(roots) for opname, opfunc in _convert[root]: if opname not in roots: opfunc.__name__ = opname setattr(cls, opname, opfunc) return cls def cmp_to_key(mycmp): class K(object): __slots__ = ['obj'] def __init__(self, obj): self.obj = obj def __lt__(self, other): return mycmp(self.obj, other.obj) < 0 def __gt__(self, other): return mycmp(self.obj, other.obj) > 0 def __eq__(self, other): return mycmp(self.obj, other.obj) == 0 def __le__(self, other): return mycmp(self.obj, other.obj) <= 0 def __ge__(self, other): return mycmp(self.obj, other.obj) >= 0 __hash__ = None return K try: from _functools import cmp_to_key except ImportError: pass _initial_missing = object() def reduce(function, sequence, initial=_initial_missing): it = iter(sequence) if initial is _initial_missing: try: value = next(it) except StopIteration: raise TypeError( "reduce() of empty iterable with no initial value") from None else: value = initial for element in it: value = function(value, element) return value try: from _functools import reduce except ImportError: pass class partial: __slots__ = "func", "args", "keywords", "__dict__", "__weakref__" def __new__(cls, func, /, *args, **keywords): if not callable(func): raise TypeError("the first argument must be callable") if hasattr(func, "func"): args = func.args + args 
keywords = {**func.keywords, **keywords} func = func.func self = super(partial, cls).__new__(cls) self.func = func self.args = args self.keywords = keywords return self def __call__(self, /, *args, **keywords): keywords = {**self.keywords, **keywords} return self.func(*self.args, *args, **keywords) @recursive_repr() def __repr__(self): qualname = type(self).__qualname__ args = [repr(self.func)] args.extend(repr(x) for x in self.args) args.extend(f"{k}={v!r}" for (k, v) in self.keywords.items()) if type(self).__module__ == "functools": return f"functools.{qualname}({', '.join(args)})" return f"{qualname}({', '.join(args)})" def __reduce__(self): return type(self), (self.func,), (self.func, self.args, self.keywords or None, self.__dict__ or None) def __setstate__(self, state): if not isinstance(state, tuple): raise TypeError("argument to __setstate__ must be a tuple") if len(state) != 4: raise TypeError(f"expected 4 items in state, got {len(state)}") func, args, kwds, namespace = state if (not callable(func) or not isinstance(args, tuple) or (kwds is not None and not isinstance(kwds, dict)) or (namespace is not None and not isinstance(namespace, dict))): raise TypeError("invalid partial state") args = tuple(args) if kwds is None: kwds = {} elif type(kwds) is not dict: kwds = dict(kwds) if namespace is None: namespace = {} self.__dict__ = namespace self.func = func self.args = args self.keywords = kwds try: from _functools import partial except ImportError: pass class partialmethod(object): def __init__(self, func, /, *args, **keywords): if not callable(func) and not hasattr(func, "__get__"): raise TypeError("{!r} is not callable or a descriptor" .format(func)) if isinstance(func, partialmethod): self.func = func.func self.args = func.args + args self.keywords = {**func.keywords, **keywords} else: self.func = func self.args = args self.keywords = keywords def __repr__(self): args = ", ".join(map(repr, self.args)) keywords = ", ".join("{}={!r}".format(k, v) for k, v in 
self.keywords.items()) format_string = "{module}.{cls}({func}, {args}, {keywords})" return format_string.format(module=self.__class__.__module__, cls=self.__class__.__qualname__, func=self.func, args=args, keywords=keywords) def _make_unbound_method(self): def _method(cls_or_self, /, *args, **keywords): keywords = {**self.keywords, **keywords} return self.func(cls_or_self, *self.args, *args, **keywords) _method.__isabstractmethod__ = self.__isabstractmethod__ _method._partialmethod = self return _method def __get__(self, obj, cls=None): get = getattr(self.func, "__get__", None) result = None if get is not None: new_func = get(obj, cls) if new_func is not self.func: result = partial(new_func, *self.args, **self.keywords) try: result.__self__ = new_func.__self__ except AttributeError: pass if result is None: result = self._make_unbound_method().__get__(obj, cls) return result @property def __isabstractmethod__(self): return getattr(self.func, "__isabstractmethod__", False) __class_getitem__ = classmethod(GenericAlias) def _unwrap_partial(func): while isinstance(func, partial): func = func.func return func _CacheInfo = namedtuple("CacheInfo", ["hits", "misses", "maxsize", "currsize"]) class _HashedSeq(list): __slots__ = 'hashvalue' def __init__(self, tup, hash=hash): self[:] = tup self.hashvalue = hash(tup) def __hash__(self): return self.hashvalue def _make_key(args, kwds, typed, kwd_mark = (object(),), fasttypes = {int, str}, tuple=tuple, type=type, len=len): key = args if kwds: key += kwd_mark for item in kwds.items(): key += item if typed: key += tuple(type(v) for v in args) if kwds: key += tuple(type(v) for v in kwds.values()) elif len(key) == 1 and type(key[0]) in fasttypes: return key[0] return _HashedSeq(key) def lru_cache(maxsize=128, typed=False): if isinstance(maxsize, int): if maxsize < 0: maxsize = 0 elif callable(maxsize) and isinstance(typed, bool): user_function, maxsize = maxsize, 128 wrapper = _lru_cache_wrapper(user_function, maxsize, typed, 
_CacheInfo) wrapper.cache_parameters = lambda : {'maxsize': maxsize, 'typed': typed} return update_wrapper(wrapper, user_function) elif maxsize is not None: raise TypeError( 'Expected first argument to be an integer, a callable, or None') def decorating_function(user_function): wrapper = _lru_cache_wrapper(user_function, maxsize, typed, _CacheInfo) wrapper.cache_parameters = lambda : {'maxsize': maxsize, 'typed': typed} return update_wrapper(wrapper, user_function) return decorating_function def _lru_cache_wrapper(user_function, maxsize, typed, _CacheInfo): sentinel = object() make_key = _make_key PREV, NEXT, KEY, RESULT = 0, 1, 2, 3 cache = {} hits = misses = 0 full = False cache_get = cache.get cache_len = cache.__len__ lock = RLock() root = [] root[:] = [root, root, None, None] if maxsize == 0: def wrapper(*args, **kwds): nonlocal misses misses += 1 result = user_function(*args, **kwds) return result elif maxsize is None: def wrapper(*args, **kwds): nonlocal hits, misses key = make_key(args, kwds, typed) result = cache_get(key, sentinel) if result is not sentinel: hits += 1 return result misses += 1 result = user_function(*args, **kwds) cache[key] = result return result else: def wrapper(*args, **kwds): nonlocal root, hits, misses, full key = make_key(args, kwds, typed) with lock: link = cache_get(key) if link is not None: link_prev, link_next, _key, result = link link_prev[NEXT] = link_next link_next[PREV] = link_prev last = root[PREV] last[NEXT] = root[PREV] = link link[PREV] = last link[NEXT] = root hits += 1 return result misses += 1 result = user_function(*args, **kwds) with lock: if key in cache: pass elif full: oldroot = root oldroot[KEY] = key oldroot[RESULT] = result root = oldroot[NEXT] oldkey = root[KEY] oldresult = root[RESULT] root[KEY] = root[RESULT] = None del cache[oldkey] cache[key] = oldroot else: last = root[PREV] link = [last, root, key, result] last[NEXT] = root[PREV] = cache[key] = link full = (cache_len() >= maxsize) return result def 
cache_info(): with lock: return _CacheInfo(hits, misses, maxsize, cache_len()) def cache_clear(): nonlocal hits, misses, full with lock: cache.clear() root[:] = [root, root, None, None] hits = misses = 0 full = False wrapper.cache_info = cache_info wrapper.cache_clear = cache_clear return wrapper try: from _functools import _lru_cache_wrapper except ImportError: pass def cache(user_function, /): 'Simple lightweight unbounded cache. Sometimes called "memoize".' return lru_cache(maxsize=None)(user_function) def _c3_merge(sequences): result = [] while True: sequences = [s for s in sequences if s] if not sequences: return result for s1 in sequences: candidate = s1[0] for s2 in sequences: if candidate in s2[1:]: candidate = None break else: break if candidate is None: raise RuntimeError("Inconsistent hierarchy") result.append(candidate) for seq in sequences: if seq[0] == candidate: del seq[0] def _c3_mro(cls, abcs=None): for i, base in enumerate(reversed(cls.__bases__)): if hasattr(base, '__abstractmethods__'): boundary = len(cls.__bases__) - i break else: boundary = 0 abcs = list(abcs) if abcs else [] explicit_bases = list(cls.__bases__[:boundary]) abstract_bases = [] other_bases = list(cls.__bases__[boundary:]) for base in abcs: if issubclass(cls, base) and not any( issubclass(b, base) for b in cls.__bases__ ): abstract_bases.append(base) for base in abstract_bases: abcs.remove(base) explicit_c3_mros = [_c3_mro(base, abcs=abcs) for base in explicit_bases] abstract_c3_mros = [_c3_mro(base, abcs=abcs) for base in abstract_bases] other_c3_mros = [_c3_mro(base, abcs=abcs) for base in other_bases] return _c3_merge( [[cls]] + explicit_c3_mros + abstract_c3_mros + other_c3_mros + [explicit_bases] + [abstract_bases] + [other_bases] ) def _compose_mro(cls, types): bases = set(cls.__mro__) def is_related(typ): return (typ not in bases and hasattr(typ, '__mro__') and not isinstance(typ, GenericAlias) and issubclass(cls, typ)) types = [n for n in types if is_related(n)] def 
is_strict_base(typ): for other in types: if typ != other and typ in other.__mro__: return True return False types = [n for n in types if not is_strict_base(n)] type_set = set(types) mro = [] for typ in types: found = [] for sub in typ.__subclasses__(): if sub not in bases and issubclass(cls, sub): found.append([s for s in sub.__mro__ if s in type_set]) if not found: mro.append(typ) continue found.sort(key=len, reverse=True) for sub in found: for subcls in sub: if subcls not in mro: mro.append(subcls) return _c3_mro(cls, abcs=mro) def _find_impl(cls, registry): mro = _compose_mro(cls, registry.keys()) match = None for t in mro: if match is not None: if (t in registry and t not in cls.__mro__ and match not in cls.__mro__ and not issubclass(match, t)): raise RuntimeError("Ambiguous dispatch: {} or {}".format( match, t)) break if t in registry: match = t return registry.get(match) def singledispatch(func): import types, weakref registry = {} dispatch_cache = weakref.WeakKeyDictionary() cache_token = None def dispatch(cls): nonlocal cache_token if cache_token is not None: current_token = get_cache_token() if cache_token != current_token: dispatch_cache.clear() cache_token = current_token try: impl = dispatch_cache[cls] except KeyError: try: impl = registry[cls] except KeyError: impl = _find_impl(cls, registry) dispatch_cache[cls] = impl return impl def _is_union_type(cls): from typing import get_origin, Union return get_origin(cls) in {Union, types.UnionType} def _is_valid_dispatch_type(cls): if isinstance(cls, type): return True from typing import get_args return (_is_union_type(cls) and all(isinstance(arg, type) for arg in get_args(cls))) def register(cls, func=None): nonlocal cache_token if _is_valid_dispatch_type(cls): if func is None: return lambda f: register(cls, f) else: if func is not None: raise TypeError( f"Invalid first argument to `register()`. " f"{cls!r} is not a class or union type." 
) ann = getattr(cls, '__annotations__', {}) if not ann: raise TypeError( f"Invalid first argument to `register()`: {cls!r}. " f"Use either `@register(some_class)` or plain `@register` " f"on an annotated function." ) func = cls from typing import get_type_hints argname, cls = next(iter(get_type_hints(func).items())) if not _is_valid_dispatch_type(cls): if _is_union_type(cls): raise TypeError( f"Invalid annotation for {argname!r}. " f"{cls!r} not all arguments are classes." ) else: raise TypeError( f"Invalid annotation for {argname!r}. " f"{cls!r} is not a class." ) if _is_union_type(cls): from typing import get_args for arg in get_args(cls): registry[arg] = func else: registry[cls] = func if cache_token is None and hasattr(cls, '__abstractmethods__'): cache_token = get_cache_token() dispatch_cache.clear() return func def wrapper(*args, **kw): if not args: raise TypeError(f'{funcname} requires at least ' '1 positional argument') return dispatch(args[0].__class__)(*args, **kw) funcname = getattr(func, '__name__', 'singledispatch function') registry[object] = func wrapper.register = register wrapper.dispatch = dispatch wrapper.registry = types.MappingProxyType(registry) wrapper._clear_cache = dispatch_cache.clear update_wrapper(wrapper, func) return wrapper class singledispatchmethod: def __init__(self, func): if not callable(func) and not hasattr(func, "__get__"): raise TypeError(f"{func!r} is not callable or a descriptor") self.dispatcher = singledispatch(func) self.func = func import weakref self._method_cache = weakref.WeakKeyDictionary() def register(self, cls, method=None): return self.dispatcher.register(cls, func=method) def __get__(self, obj, cls=None): if self._method_cache is not None: try: _method = self._method_cache[obj] except TypeError: self._method_cache = None except KeyError: pass else: return _method dispatch = self.dispatcher.dispatch def _method(*args, **kwargs): return dispatch(args[0].__class__).__get__(obj, cls)(*args, **kwargs) 
_method.__isabstractmethod__ = self.__isabstractmethod__ _method.register = self.register update_wrapper(_method, self.func) if self._method_cache is not None: self._method_cache[obj] = _method return _method @property def __isabstractmethod__(self): return getattr(self.func, '__isabstractmethod__', False) _NOT_FOUND = object() class cached_property: def __init__(self, func): self.func = func self.attrname = None self.__doc__ = func.__doc__ self.__module__ = func.__module__ def __set_name__(self, owner, name): if self.attrname is None: self.attrname = name elif name != self.attrname: raise TypeError( "Cannot assign the same cached_property to two different names " f"({self.attrname!r} and {name!r})." ) def __get__(self, instance, owner=None): if instance is None: return self if self.attrname is None: raise TypeError( "Cannot use cached_property instance without calling __set_name__ on it.") try: cache = instance.__dict__ except AttributeError: msg = ( f"No '__dict__' attribute on {type(instance).__name__!r} " f"instance to cache {self.attrname!r} property." ) raise TypeError(msg) from None val = cache.get(self.attrname, _NOT_FOUND) if val is _NOT_FOUND: val = self.func(instance) try: cache[self.attrname] = val except TypeError: msg = ( f"The '__dict__' attribute on {type(instance).__name__!r} instance " f"does not support item assignment for caching {self.attrname!r} property." ) raise TypeError(msg) from None return val __class_getitem__ = classmethod(GenericAlias)
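To illustrate the dispatch rules above, here is a short sketch; the `describe` function and its implementations are made up for the example:

```python
from functools import singledispatch

@singledispatch
def describe(obj):
    # Default implementation, registered under object.
    return "object"

@describe.register          # dispatch type inferred from the annotation
def _(obj: int):
    return "int"

@describe.register(list)    # dispatch type given explicitly
def _(obj):
    return "list"

print(describe(3))          # "int"
print(describe([1, 2]))     # "list"
print(describe(2.5))        # no float implementation -> default, "object"
print(describe(True))       # bool has no implementation; its MRO finds int
```

The last call shows the MRO-based fallback described above: `bool` is not in the registry, so `_find_impl` walks bool's method resolution order and settles on the `int` implementation.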
Path operations common to more than one OS. Do not use directly -- the OS-specific modules import the appropriate functions from this module themselves.

exists(path): test whether a path exists; returns False for broken symbolic links (this is False for dangling symbolic links on systems that support them).
isfile(path): test whether a path is a regular file.
isdir(s): return True if the pathname refers to an existing directory. This follows symbolic links, so both islink() and isdir() can be true for the same path on systems that support symlinks.
islink(path): test whether a path is a symbolic link. This will always return False on systems where os.lstat doesn't exist.
getsize(filename): return the size of a file, reported by os.stat().
getmtime(filename): return the last modification time of a file, reported by os.stat().
getatime(filename): return the last access time of a file, reported by os.stat().
getctime(filename): return the metadata change time of a file, reported by os.stat().
commonprefix(m): return the longest prefix of all list elements. Some people pass in a list of pathname parts to operate in an OS-agnostic fashion; don't try to translate in that case, as that's an abuse of the API and they are already doing what they need to be OS-agnostic (and so they most likely won't be using an os.PathLike object in the sublists).
samestat(s1, s2): test whether two stat buffers (obtained from stat, fstat or lstat) reference the same file.
samefile(f1, f2): are two filenames really pointing to the same file? Tests whether two pathnames reference the same actual file or directory. This is determined by the device number and i-node number, and raises an exception if an os.stat() call on either pathname fails.
sameopenfile(fp1, fp2): are two open files really referencing the same file (not necessarily the same file descriptor)? Tests whether two open file objects reference the same file.
_splitext(p, sep, altsep, extsep): split a path in root and extension. The extension is everything starting at the last dot in the last pathname component; the root is everything before that. It is always true that root + ext == p. This is the generic implementation of splitext, to be parametrized with the separators. The extension is everything from the last dot to the end, ignoring leading dots (the inner loop skips all leading dots). Returns "(root, ext)"; ext may be empty. Note: this code must work for text and bytes strings.
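As a small illustration of the commonprefix() caveat above -- the comparison is character-based, not path-component-based; the example paths are arbitrary:

```python
import os.path

# commonprefix() compares character by character, so the result may
# not be a valid path component boundary.
paths = ["/usr/lib/python3", "/usr/local/bin"]
print(os.path.commonprefix(paths))   # "/usr/l" -- not a real directory

# Lists of pre-split path parts are compared element-wise instead,
# which is the OS-agnostic usage mentioned above.
parts = [["usr", "lib"], ["usr", "local"]]
print(os.path.commonprefix(parts))   # ["usr"]
```

To get the longest valid directory prefix, os.path.commonpath() is the component-aware alternative.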
import os
import stat

__all__ = ['commonprefix', 'exists', 'getatime', 'getctime', 'getmtime',
           'getsize', 'isdir', 'isfile', 'islink', 'samefile', 'sameopenfile',
           'samestat']


def exists(path):
    try:
        os.stat(path)
    except (OSError, ValueError):
        return False
    return True


def isfile(path):
    try:
        st = os.stat(path)
    except (OSError, ValueError):
        return False
    return stat.S_ISREG(st.st_mode)


def isdir(s):
    try:
        st = os.stat(s)
    except (OSError, ValueError):
        return False
    return stat.S_ISDIR(st.st_mode)


def islink(path):
    try:
        st = os.lstat(path)
    except (OSError, ValueError, AttributeError):
        return False
    return stat.S_ISLNK(st.st_mode)


def getsize(filename):
    return os.stat(filename).st_size


def getmtime(filename):
    return os.stat(filename).st_mtime


def getatime(filename):
    return os.stat(filename).st_atime


def getctime(filename):
    return os.stat(filename).st_ctime


def commonprefix(m):
    "Given a list of pathnames, returns the longest common leading component"
    if not m:
        return ''
    if not isinstance(m[0], (list, tuple)):
        m = tuple(map(os.fspath, m))
    s1 = min(m)
    s2 = max(m)
    for i, c in enumerate(s1):
        if c != s2[i]:
            return s1[:i]
    return s1


def samestat(s1, s2):
    return (s1.st_ino == s2.st_ino and
            s1.st_dev == s2.st_dev)


def samefile(f1, f2):
    s1 = os.stat(f1)
    s2 = os.stat(f2)
    return samestat(s1, s2)


def sameopenfile(fp1, fp2):
    s1 = os.fstat(fp1)
    s2 = os.fstat(fp2)
    return samestat(s1, s2)


def _splitext(p, sep, altsep, extsep):
    sepIndex = p.rfind(sep)
    if altsep:
        altsepIndex = p.rfind(altsep)
        sepIndex = max(sepIndex, altsepIndex)

    dotIndex = p.rfind(extsep)
    if dotIndex > sepIndex:
        # skip all leading dots
        filenameIndex = sepIndex + 1
        while filenameIndex < dotIndex:
            if p[filenameIndex:filenameIndex+1] != extsep:
                return p[:dotIndex], p[dotIndex:]
            filenameIndex += 1

    return p, p[:0]


def _check_arg_types(funcname, *args):
    hasstr = hasbytes = False
    for s in args:
        if isinstance(s, str):
            hasstr = True
        elif isinstance(s, bytes):
            hasbytes = True
        else:
            raise TypeError(f'{funcname}() argument must be str, bytes, or '
                            f'os.PathLike object, not '
                            f'{s.__class__.__name__!r}') from None
    if hasstr and hasbytes:
        raise TypeError("Can't mix strings and bytes in path "
                        "components") from None
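The leading-dot loop in _splitext() above is what keeps dotfiles extension-less; a sketch using the public os.path.splitext() wrapper:

```python
import os.path

# The extension starts at the LAST dot of the last path component.
print(os.path.splitext("archive.tar.gz"))   # ('archive.tar', '.gz')

# Leading dots are skipped, so a dotfile has no extension at all.
print(os.path.splitext(".bashrc"))          # ('.bashrc', '')

# Dots in earlier components are ignored; root + ext == p always holds.
root, ext = os.path.splitext("pkg.d/module")
print((root, ext))                          # ('pkg.d/module', '')
```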
"""Parser for command line options.

This module helps scripts to parse the command line arguments in
sys.argv.  It supports the same conventions as the Unix getopt()
function (including the special meanings of arguments of the form '-'
and '--').  Long options similar to those supported by GNU software
may be used as well via an optional third argument.  This module
provides two functions and an exception:

getopt() -- Parse command line options.
gnu_getopt() -- Like getopt(), but allow option and non-option
arguments to be intermixed.
GetoptError -- exception (class) raised with 'opt' attribute, which is the
option involved with the exception.
"""

# Long option support added by Lars Wirzenius <liw@iki.fi>.
#
# Gerrit Holl <gerrit@nl.linux.org> moved the string-based exceptions
# to class-based exceptions.
#
# Peter Åstrand <astrand@lysator.liu.se> added gnu_getopt().
#
# TODO for gnu_getopt():
#
# - GNU getopt_long_only mechanism
# - allow the caller to specify ordering
# - RETURN_IN_ORDER option
# - GNU extension with '-' as first character of option string
# - optional arguments, specified by double colons
# - an option string with a W followed by semicolon should
#   treat "-W foo" as "--foo"

getopt(args, options[, long_options]) -> opts, args

    Parses command line options and parameter list.  args is the
    argument list to be parsed, without the leading reference to the
    running program.  Typically, this means "sys.argv[1:]".  shortopts
    is the string of option letters that the script wants to
    recognize, with options that require an argument followed by a
    colon (i.e., the same format that Unix getopt() uses).  If
    specified, longopts is a list of strings with the names of the
    long options which should be supported.  The leading '--'
    characters should not be included in the option name.  Options
    which require an argument should be followed by an equal sign
    ('=').

    The return value consists of two elements: the first is a list of
    (option, value) pairs; the second is the list of program arguments
    left after the option list was stripped (this is a trailing slice
    of the first argument).  Each option-and-value pair returned has
    the option as its first element, prefixed with a hyphen (e.g.,
    '-x'), and the option argument as its second element, or an empty
    string if the option has no argument.  The options occur in the
    list in the same order in which they were found, thus allowing
    multiple occurrences.  Long and short options may be mixed.

gnu_getopt(args, options[, long_options]) -> opts, args

    This function works like getopt(), except that GNU style scanning
    mode is used by default.  This means that option and non-option
    arguments may be intermixed.  The getopt() function stops
    processing options as soon as a non-option argument is
    encountered.

    If the first character of the option string is '+', or if the
    environment variable POSIXLY_CORRECT is set, then option
    processing stops as soon as a non-option argument is encountered.
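The ordering difference between the two parsers can be seen in a short sketch. The argument values are hypothetical, and POSIXLY_CORRECT is cleared first so the GNU intermixing behaviour is not suppressed by the environment:

```python
import os
import getopt

# Ensure gnu_getopt() really uses GNU scanning mode for this demo.
os.environ.pop("POSIXLY_CORRECT", None)

args = ["-a", "input.txt", "-b"]

# Classic getopt() stops at the first non-option argument, so -b is
# left in the remaining argument list.
opts, rest = getopt.getopt(args, "ab")
print(opts, rest)   # [('-a', '')] ['input.txt', '-b']

# gnu_getopt() allows options and non-options to be intermixed.
opts, rest = getopt.gnu_getopt(args, "ab")
print(opts, rest)   # [('-a', ''), ('-b', '')] ['input.txt']
```

The one-shot `+` prefix on the short-option string gives the same "stop at first non-option" behaviour as POSIXLY_CORRECT, per the docstring above.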
__all__ = ["GetoptError", "error", "getopt", "gnu_getopt"]

import os

try:
    from gettext import gettext as _
except ImportError:
    # Bootstrapping Python: gettext's dependencies not built yet
    def _(s): return s


class GetoptError(Exception):
    opt = ''
    msg = ''

    def __init__(self, msg, opt=''):
        self.msg = msg
        self.opt = opt
        Exception.__init__(self, msg, opt)

    def __str__(self):
        return self.msg

error = GetoptError  # backward compatibility


def getopt(args, shortopts, longopts=[]):
    opts = []
    if isinstance(longopts, str):
        longopts = [longopts]
    else:
        longopts = list(longopts)
    while args and args[0].startswith('-') and args[0] != '-':
        if args[0] == '--':
            args = args[1:]
            break
        if args[0].startswith('--'):
            opts, args = do_longs(opts, args[0][2:], longopts, args[1:])
        else:
            opts, args = do_shorts(opts, args[0][1:], shortopts, args[1:])

    return opts, args


def gnu_getopt(args, shortopts, longopts=[]):
    opts = []
    prog_args = []
    if isinstance(longopts, str):
        longopts = [longopts]
    else:
        longopts = list(longopts)

    # Allow options after non-option arguments?
    if shortopts.startswith('+'):
        shortopts = shortopts[1:]
        all_options_first = True
    elif os.environ.get("POSIXLY_CORRECT"):
        all_options_first = True
    else:
        all_options_first = False

    while args:
        if args[0] == '--':
            prog_args += args[1:]
            break

        if args[0][:2] == '--':
            opts, args = do_longs(opts, args[0][2:], longopts, args[1:])
        elif args[0][:1] == '-' and args[0] != '-':
            opts, args = do_shorts(opts, args[0][1:], shortopts, args[1:])
        else:
            if all_options_first:
                prog_args += args
                break
            else:
                prog_args.append(args[0])
                args = args[1:]

    return opts, prog_args


def do_longs(opts, opt, longopts, args):
    try:
        i = opt.index('=')
    except ValueError:
        optarg = None
    else:
        opt, optarg = opt[:i], opt[i+1:]

    has_arg, opt = long_has_args(opt, longopts)
    if has_arg:
        if optarg is None:
            if not args:
                raise GetoptError(_('option --%s requires argument') % opt, opt)
            optarg, args = args[0], args[1:]
    elif optarg is not None:
        raise GetoptError(_('option --%s must not have an argument') % opt, opt)
    opts.append(('--' + opt, optarg or ''))
    return opts, args


# Return:
#   has_arg?
#   full option name
def long_has_args(opt, longopts):
    possibilities = [o for o in longopts if o.startswith(opt)]
    if not possibilities:
        raise GetoptError(_('option --%s not recognized') % opt, opt)
    # Is there an exact match?
    if opt in possibilities:
        return False, opt
    elif opt + '=' in possibilities:
        return True, opt
    # No exact match, so better be unique.
    if len(possibilities) > 1:
        # XXX since possibilities contains all valid continuations, might be
        # nice to work them into the error msg
        raise GetoptError(_('option --%s not a unique prefix') % opt, opt)
    assert len(possibilities) == 1
    unique_match = possibilities[0]
    has_arg = unique_match.endswith('=')
    if has_arg:
        unique_match = unique_match[:-1]
    return has_arg, unique_match


def do_shorts(opts, optstring, shortopts, args):
    while optstring != '':
        opt, optstring = optstring[0], optstring[1:]
        if short_has_arg(opt, shortopts):
            if optstring == '':
                if not args:
                    raise GetoptError(_('option -%s requires argument') % opt,
                                      opt)
                optstring, args = args[0], args[1:]
            optarg, optstring = optstring, ''
        else:
            optarg = ''
        opts.append(('-' + opt, optarg))
    return opts, args


def short_has_arg(opt, shortopts):
    for i in range(len(shortopts)):
        if opt == shortopts[i] != ':':
            return shortopts.startswith(':', i+1)
    raise GetoptError(_('option -%s not recognized') % opt, opt)


if __name__ == '__main__':
    import sys
    print(getopt(sys.argv[1:], "a:b", ["alpha=", "beta"]))
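The prefix-matching behaviour of `long_has_args()` is observable through the public API: an unambiguous abbreviation of a long option is accepted, while an ambiguous one raises `GetoptError` with the offending option in `opt`. The option names below are hypothetical:

```python
import getopt

# '--al' is an unambiguous prefix of '--alpha', which takes an argument.
opts, rest = getopt.getopt(["--al", "3"], "", ["alpha=", "beta"])
print(opts)   # [('--alpha', '3')]

# '--b' matches both '--beta' and '--bravo', so it is rejected.
try:
    getopt.getopt(["--b"], "", ["beta", "bravo"])
except getopt.GetoptError as e:
    print(e.opt, "-", e.msg)   # b - option --b not a unique prefix
```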
"""Utilities to get a password and/or the current user name.

getpass(prompt[, stream]) - Prompt for a password, with echo turned off.
getuser() - Get the user name from the environment or password database.

GetPassWarning - This UserWarning is issued when getpass() cannot prevent
                 echoing of the password contents while reading.

On Windows, the msvcrt module will be used.
"""

# Authors: Piers Lauder (original)
#          Guido van Rossum (Windows support and cleanup)
#          Gregory P. Smith (tty support & GetPassWarning)

unix_getpass(prompt='Password: ', stream=None):
    """Prompt for a password, with echo turned off.

    Args:
      prompt: Written on stream to ask for the input.  Default: 'Password: '
      stream: A writable file object to display the prompt.  Defaults to
              the tty.  If no tty is available defaults to sys.stderr.
    Returns:
      The seKr3t input.
    Raises:
      EOFError: If our input tty or stdin was closed.
      GetPassWarning: When we were unable to turn echo off on the input.

    Always restores terminal settings before returning.
    """

win_getpass(prompt='Password: ', stream=None):
    """Prompt for password with echo off, using Windows getwch()."""

getuser():
    """Get the username from the environment or password database.

    First try various environment variables, then the password database.
    This works on Windows as long as USERNAME is set.  Any failure to
    find a username raises OSError.

    .. versionchanged:: 3.13
        Previously, various exceptions beyond just OSError were raised.
    """
import contextlib
import io
import os
import sys

__all__ = ["getpass", "getuser", "GetPassWarning"]


class GetPassWarning(UserWarning):
    pass


def unix_getpass(prompt='Password: ', stream=None):
    passwd = None
    with contextlib.ExitStack() as stack:
        try:
            # Always try reading and writing directly on the tty first.
            fd = os.open('/dev/tty', os.O_RDWR | os.O_NOCTTY)
            tty = io.FileIO(fd, 'w+')
            stack.enter_context(tty)
            input = io.TextIOWrapper(tty)
            stack.enter_context(input)
            if not stream:
                stream = input
        except OSError:
            # If that fails, see if stdin can be controlled.
            stack.close()
            try:
                fd = sys.stdin.fileno()
            except (AttributeError, ValueError):
                fd = None
                passwd = fallback_getpass(prompt, stream)
            input = sys.stdin
            if not stream:
                stream = sys.stderr

        if fd is not None:
            try:
                old = termios.tcgetattr(fd)  # a copy to save
                new = old[:]
                new[3] &= ~termios.ECHO  # 3 == 'lflags'
                tcsetattr_flags = termios.TCSAFLUSH
                if hasattr(termios, 'TCSASOFT'):
                    tcsetattr_flags |= termios.TCSASOFT
                try:
                    termios.tcsetattr(fd, tcsetattr_flags, new)
                    passwd = _raw_input(prompt, stream, input=input)
                finally:
                    termios.tcsetattr(fd, tcsetattr_flags, old)
                    stream.flush()  # issue7208
            except termios.error:
                if passwd is not None:
                    # _raw_input succeeded.  The final tcsetattr failed.
                    # Reraise instead of leaving the terminal in an
                    # unknown state.
                    raise
                # We can't control the tty or stdin.  Give up and use
                # normal IO.  fallback_getpass() raises an appropriate
                # warning.
                if stream is not input:
                    # clean up unused file objects before blocking
                    stack.close()
                passwd = fallback_getpass(prompt, stream)

        stream.write('\n')
        return passwd


def win_getpass(prompt='Password: ', stream=None):
    if sys.stdin is not sys.__stdin__:
        return fallback_getpass(prompt, stream)

    for c in prompt:
        msvcrt.putwch(c)
    pw = ""
    while 1:
        c = msvcrt.getwch()
        if c == '\r' or c == '\n':
            break
        if c == '\003':
            raise KeyboardInterrupt
        if c == '\b':
            pw = pw[:-1]
        else:
            pw = pw + c
    msvcrt.putwch('\r')
    msvcrt.putwch('\n')
    return pw


def fallback_getpass(prompt='Password: ', stream=None):
    import warnings
    warnings.warn("Can not control echo on the terminal.", GetPassWarning,
                  stacklevel=2)
    if not stream:
        stream = sys.stderr
    print("Warning: Password input may be echoed.", file=stream)
    return _raw_input(prompt, stream)


def _raw_input(prompt="", stream=None, input=None):
    # This doesn't save the string in the GNU readline history.
    if not stream:
        stream = sys.stderr
    if not input:
        input = sys.stdin
    prompt = str(prompt)
    if prompt:
        try:
            stream.write(prompt)
        except UnicodeEncodeError:
            # Use replace error handler to get as much as possible printed.
            prompt = prompt.encode(stream.encoding, 'replace')
            prompt = prompt.decode(stream.encoding)
            stream.write(prompt)
        stream.flush()
    # NOTE: The Python C API calls flockfile() (and unlock) during readline.
    line = input.readline()
    if not line:
        raise EOFError
    if line[-1] == '\n':
        line = line[:-1]
    return line


def getuser():
    # First try various environment variables, then the password database.
    for name in ('LOGNAME', 'USER', 'LNAME', 'USERNAME'):
        user = os.environ.get(name)
        if user:
            return user

    try:
        import pwd
        return pwd.getpwuid(os.getuid())[0]
    except (ImportError, KeyError) as e:
        raise OSError('No username set in the environment') from e


# Bind the name getpass to the appropriate function.
try:
    import termios
    # it's possible there is an incompatible termios from the
    # McMillan Installer, make sure we have a UNIX-compatible termios
    termios.tcgetattr, termios.tcsetattr
except (ImportError, AttributeError):
    try:
        import msvcrt
    except ImportError:
        getpass = fallback_getpass
    else:
        getpass = win_getpass
else:
    getpass = unix_getpass
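The environment lookup order in `getuser()` (LOGNAME, then USER, LNAME, USERNAME, then the pwd database) can be demonstrated non-interactively. The user names and the environment mutation here are purely illustrative:

```python
import os
from getpass import getuser

# LOGNAME is checked first, so it wins regardless of the other variables.
os.environ['LOGNAME'] = 'alice'
print(getuser())   # alice

# Setting a later-checked variable does not change the result.
os.environ['USER'] = 'bob'
print(getuser())   # still alice
```

The password prompting itself (`getpass()`) requires a controllable tty, so it is not demonstrated here; on systems without one it falls back to `fallback_getpass()` and issues a `GetPassWarning`.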
"""Filename globbing utility."""

glob(pathname, *, root_dir=None, dir_fd=None, recursive=False,
     include_hidden=False):
    """Return a list of paths matching a pathname pattern.

    The pattern may contain simple shell-style wildcards a la fnmatch.
    Unlike fnmatch, filenames starting with a dot are special cases
    that are not matched by '*' and '?' patterns by default.

    If `include_hidden` is true, the patterns will match hidden
    directories.

    If `recursive` is true, the pattern '**' will match any files and
    zero or more directories and subdirectories.
    """

iglob(pathname, *, root_dir=None, dir_fd=None, recursive=False,
      include_hidden=False):
    """Return an iterator which yields the paths matching a pathname
    pattern.

    The pattern may contain simple shell-style wildcards a la fnmatch.
    However, unlike fnmatch, filenames starting with a dot are special
    cases that are not matched by '*' and '?' patterns.

    If recursive is true, the pattern '**' will match any files and
    zero or more directories and subdirectories.
    """

escape(pathname):
    """Escape all special characters.

    Escaping is done by wrapping any of "*?[" between square brackets.
    Metacharacters do not work in the drive part and shouldn't be
    escaped.
    """

translate(pat, *, recursive=False, include_hidden=False, seps=None):
    """Translate a pathname with shell wildcards to a regular
    expression.

    If `recursive` is true, the pattern segment '**' will match any
    number of path segments.  If '**' appears outside its own segment,
    ValueError will be raised.

    If `include_hidden` is true, wildcards can match path segments
    beginning with a dot.

    If a sequence of separator characters is given to `seps`, they
    will be used to split the pattern into segments and match path
    separators.  If not given, os.path.sep and os.path.altsep (where
    available) are used.
    """
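The escaping rule described above (wrap each of `*?[` in square brackets) is easy to check, and a pattern produced by `escape()` matches its own input literally under fnmatch rules. The filename is hypothetical:

```python
import glob
import fnmatch

name = 'status[ok]*.txt'

escaped = glob.escape(name)
print(escaped)   # status[[]ok][*].txt

# The escaped pattern matches the original name exactly once, literally.
print(fnmatch.fnmatch(name, escaped))   # True
```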
import contextlib
import os
import re
import fnmatch
import itertools
import stat
import sys

__all__ = ["glob", "iglob", "escape"]


def glob(pathname, *, root_dir=None, dir_fd=None, recursive=False,
         include_hidden=False):
    return list(iglob(pathname, root_dir=root_dir, dir_fd=dir_fd,
                      recursive=recursive, include_hidden=include_hidden))


def iglob(pathname, *, root_dir=None, dir_fd=None, recursive=False,
          include_hidden=False):
    sys.audit("glob.glob", pathname, recursive)
    sys.audit("glob.glob/2", pathname, recursive, root_dir, dir_fd)
    if root_dir is not None:
        root_dir = os.fspath(root_dir)
    else:
        root_dir = pathname[:0]
    it = _iglob(pathname, root_dir, dir_fd, recursive, False,
                include_hidden=include_hidden)
    if not pathname or recursive and _isrecursive(pathname[:2]):
        try:
            s = next(it)  # skip empty string
            if s:
                it = itertools.chain((s,), it)
        except StopIteration:
            pass
    return it


def _iglob(pathname, root_dir, dir_fd, recursive, dironly,
           include_hidden=False):
    dirname, basename = os.path.split(pathname)
    if not has_magic(pathname):
        assert not dironly
        if basename:
            if _lexists(_join(root_dir, pathname), dir_fd):
                yield pathname
        else:
            # Patterns ending with a slash should match only directories
            if _isdir(_join(root_dir, dirname), dir_fd):
                yield pathname
        return
    if not dirname:
        if recursive and _isrecursive(basename):
            yield from _glob2(root_dir, basename, dir_fd, dironly,
                              include_hidden=include_hidden)
        else:
            yield from _glob1(root_dir, basename, dir_fd, dironly,
                              include_hidden=include_hidden)
        return
    # `os.path.split()` returns the argument itself as a dirname if it is a
    # drive or UNC path.  Prevent an infinite recursion if a drive or UNC path
    # contains magic characters (i.e. r'\\?\C:').
    if dirname != pathname and has_magic(dirname):
        dirs = _iglob(dirname, root_dir, dir_fd, recursive, True,
                      include_hidden=include_hidden)
    else:
        dirs = [dirname]
    if has_magic(basename):
        if recursive and _isrecursive(basename):
            glob_in_dir = _glob2
        else:
            glob_in_dir = _glob1
    else:
        glob_in_dir = _glob0
    for dirname in dirs:
        for name in glob_in_dir(_join(root_dir, dirname), basename, dir_fd,
                                dironly, include_hidden=include_hidden):
            yield os.path.join(dirname, name)


# These 2 helper functions non-recursively glob inside a literal directory.
# They return a list of basenames.  _glob1 accepts a pattern while _glob0
# takes a literal basename (so it only has to check for its existence).

def _glob1(dirname, pattern, dir_fd, dironly, include_hidden=False):
    names = _listdir(dirname, dir_fd, dironly)
    if include_hidden or not _ishidden(pattern):
        names = (x for x in names if include_hidden or not _ishidden(x))
    return fnmatch.filter(names, pattern)


def _glob0(dirname, basename, dir_fd, dironly, include_hidden=False):
    if basename:
        if _lexists(_join(dirname, basename), dir_fd):
            return [basename]
    else:
        # `os.path.split()` returns an empty basename for paths ending with a
        # directory separator.  'q*x/' should match only directories.
        if _isdir(dirname, dir_fd):
            return [basename]
    return []


# Following functions are not public but can be used by third-party code.

def glob0(dirname, pattern):
    return _glob0(dirname, pattern, None, False)


def glob1(dirname, pattern):
    return _glob1(dirname, pattern, None, False)


# This helper function recursively yields relative pathnames inside a literal
# directory.

def _glob2(dirname, pattern, dir_fd, dironly, include_hidden=False):
    assert _isrecursive(pattern)
    yield pattern[:0]
    yield from _rlistdir(dirname, dir_fd, dironly,
                         include_hidden=include_hidden)


# If dironly is false, yields all file names inside a directory.
# If dironly is true, yields only directory names.
def _iterdir(dirname, dir_fd, dironly):
    try:
        fd = None
        fsencode = None
        if dir_fd is not None:
            if dirname:
                fd = arg = os.open(dirname, _dir_open_flags, dir_fd=dir_fd)
            else:
                arg = dir_fd
            if isinstance(dirname, bytes):
                fsencode = os.fsencode
        elif dirname:
            arg = dirname
        elif isinstance(dirname, bytes):
            arg = bytes(os.curdir, 'ASCII')
        else:
            arg = os.curdir
        try:
            with os.scandir(arg) as it:
                for entry in it:
                    try:
                        if not dironly or entry.is_dir():
                            if fsencode is not None:
                                yield fsencode(entry.name)
                            else:
                                yield entry.name
                    except OSError:
                        pass
        finally:
            if fd is not None:
                os.close(fd)
    except OSError:
        return


def _listdir(dirname, dir_fd, dironly):
    with contextlib.closing(_iterdir(dirname, dir_fd, dironly)) as it:
        return list(it)


# Recursively yields relative pathnames inside a literal directory.
def _rlistdir(dirname, dir_fd, dironly, include_hidden=False):
    names = _listdir(dirname, dir_fd, dironly)
    for x in names:
        if include_hidden or not _ishidden(x):
            yield x
            path = _join(dirname, x) if dirname else x
            for y in _rlistdir(path, dir_fd, dironly,
                               include_hidden=include_hidden):
                yield _join(x, y)


def _lexists(pathname, dir_fd):
    # Same as os.path.lexists(), but with dir_fd
    if dir_fd is None:
        return os.path.lexists(pathname)
    try:
        os.lstat(pathname, dir_fd=dir_fd)
    except (OSError, ValueError):
        return False
    else:
        return True


def _isdir(pathname, dir_fd):
    # Same as os.path.isdir(), but with dir_fd
    if dir_fd is None:
        return os.path.isdir(pathname)
    try:
        st = os.stat(pathname, dir_fd=dir_fd)
    except (OSError, ValueError):
        return False
    else:
        return stat.S_ISDIR(st.st_mode)


def _join(dirname, basename):
    # It is common if dirname or basename is empty
    if not dirname or not basename:
        return dirname or basename
    return os.path.join(dirname, basename)


magic_check = re.compile('([*?[])')
magic_check_bytes = re.compile(b'([*?[])')


def has_magic(s):
    if isinstance(s, bytes):
        match = magic_check_bytes.search(s)
    else:
        match = magic_check.search(s)
    return match is not None


def _ishidden(path):
    return path[0] in ('.', b'.'[0])


def _isrecursive(pattern):
    if isinstance(pattern, bytes):
        return pattern == b'**'
    else:
        return pattern == '**'


def escape(pathname):
    # Escaping is done by wrapping any of "*?[" between square brackets.
    # Metacharacters do not work in the drive part and shouldn't be escaped.
    drive, pathname = os.path.splitdrive(pathname)
    if isinstance(pathname, bytes):
        pathname = magic_check_bytes.sub(br'[\1]', pathname)
    else:
        pathname = magic_check.sub(r'[\1]', pathname)
    return drive + pathname


_dir_open_flags = os.O_RDONLY | getattr(os, 'O_DIRECTORY', 0)


def translate(pat, *, recursive=False, include_hidden=False, seps=None):
    if not seps:
        if os.path.altsep:
            seps = (os.path.sep, os.path.altsep)
        else:
            seps = os.path.sep
    escaped_seps = ''.join(map(re.escape, seps))
    any_sep = f'[{escaped_seps}]' if len(seps) > 1 else escaped_seps
    not_sep = f'[^{escaped_seps}]'
    if include_hidden:
        one_last_segment = f'{not_sep}+'
        one_segment = f'{one_last_segment}{any_sep}'
        any_segments = f'(?:.+{any_sep})?'
        any_last_segments = '.*'
    else:
        one_last_segment = f'[^{escaped_seps}.]{not_sep}*'
        one_segment = f'{one_last_segment}{any_sep}'
        any_segments = f'(?:{one_segment})*'
        any_last_segments = f'{any_segments}(?:{one_last_segment})?'

    results = []
    parts = re.split(any_sep, pat)
    last_part_idx = len(parts) - 1
    for idx, part in enumerate(parts):
        if part == '*':
            results.append(one_segment if idx < last_part_idx
                           else one_last_segment)
            continue
        if recursive:
            if part == '**':
                if idx < last_part_idx:
                    if parts[idx + 1] != '**':
                        results.append(any_segments)
                else:
                    results.append(any_last_segments)
                continue
            elif '**' in part:
                raise ValueError("Invalid pattern: '**' can only be "
                                 "an entire path component")
        if part:
            if not include_hidden and part[0] in '*?':
                results.append(r'(?!\.)')
            results.extend(fnmatch._translate(part, f'{not_sep}*', not_sep))
        if idx < last_part_idx:
            results.append(any_sep)
    res = ''.join(results)
    return fr'(?s:{res})\Z'
from types import GenericAlias

__all__ = ["TopologicalSorter", "CycleError"]

_NODE_OUT = -1
_NODE_DONE = -2


class _NodeInfo:
    __slots__ = 'node', 'npredecessors', 'successors'

    def __init__(self, node):
        # The node this class is augmenting.
        self.node = node

        # Number of predecessors, generally >= 0.  When this value falls to 0,
        # and is returned by get_ready(), this is set to _NODE_OUT and when
        # the node is marked done by a call to done(), set to _NODE_DONE.
        self.npredecessors = 0

        # List of successor nodes.  The list can contain duplicated elements
        # as long as they're all reflected in the successor's npredecessors
        # attribute.
        self.successors = []


class CycleError(ValueError):
    """Subclass of ValueError raised by TopologicalSorter.prepare if cycles
    exist in the working graph.

    If multiple cycles exist, only one undefined choice among them will be
    reported and included in the exception.

    The detected cycle can be accessed via the second element in the *args*
    attribute of the exception instance and consists in a list of nodes, such
    that each node is, in the graph, an immediate predecessor of the next node
    in the list.  In the reported list, the first and the last node will be
    the same, to make it clear that it is cyclic.
    """
    pass


class TopologicalSorter:
    """Provides functionality to topologically sort a graph of hashable
    nodes."""

    def __init__(self, graph=None):
        self._node2info = {}
        self._ready_nodes = None
        self._npassedout = 0
        self._nfinished = 0

        if graph is not None:
            for node, predecessors in graph.items():
                self.add(node, *predecessors)

    def _get_nodeinfo(self, node):
        if (result := self._node2info.get(node)) is None:
            self._node2info[node] = result = _NodeInfo(node)
        return result

    def add(self, node, *predecessors):
        """Add a new node and its predecessors to the graph.

        Both the *node* and all elements in *predecessors* must be hashable.

        If called multiple times with the same node argument, the set of
        dependencies will be the union of all dependencies passed in.

        It is possible to add a node with no dependencies (*predecessors* is
        not provided) as well as provide a dependency twice.  If a node that
        has not been provided before is included among *predecessors* it will
        be automatically added to the graph with no predecessors of its own.

        Raises ValueError if called after "prepare()".
        """
        if self._ready_nodes is not None:
            raise ValueError("Nodes cannot be added after a call to prepare()")

        # Create the node -> predecessor edges
        nodeinfo = self._get_nodeinfo(node)
        nodeinfo.npredecessors += len(predecessors)

        # Create the predecessor -> node edges
        for pred in predecessors:
            pred_info = self._get_nodeinfo(pred)
            pred_info.successors.append(node)

    def prepare(self):
        """Mark the graph as finished and check for cycles in the graph.

        If any cycle is detected, "CycleError" will be raised, but
        "get_ready()" can still be used to obtain as many nodes as possible
        until cycles block more progress.  After a call to this function, the
        graph cannot be modified and therefore no more nodes can be added
        using "add()".
        """
        if self._ready_nodes is not None:
            raise ValueError("cannot prepare() more than once")

        self._ready_nodes = [i.node for i in self._node2info.values()
                             if i.npredecessors == 0]
        # ready_nodes is set before we look for cycles on purpose:
        # if the user wants to catch the CycleError, that's fine,
        # they can continue using the instance to grab as many
        # nodes as possible before cycles block more progress
        cycle = self._find_cycle()
        if cycle:
            raise CycleError(f"nodes are in a cycle", cycle)

    def get_ready(self):
        """Return a tuple of all the nodes that are ready.

        Initially it returns all nodes with no predecessors; once those are
        marked as processed by calling "done()", further calls will return all
        new nodes that have all their predecessors already processed.  Once no
        more progress can be made, empty tuples are returned.

        Raises ValueError if called without calling "prepare()" previously.
        """
        if self._ready_nodes is None:
            raise ValueError("prepare() must be called first")

        # Get the nodes that are ready and mark them
        result = tuple(self._ready_nodes)
        n2i = self._node2info
        for node in result:
            n2i[node].npredecessors = _NODE_OUT

        # Clean the list of nodes that are ready and update
        # the counter of nodes that we have returned.
        self._ready_nodes.clear()
        self._npassedout += len(result)

        return result

    def is_active(self):
        """Return ``True`` if more progress can be made and ``False``
        otherwise.

        Progress can be made if cycles do not block the resolution and either
        there are still nodes ready that haven't yet been returned by
        "get_ready()" or the number of nodes marked "done()" is less than the
        number that have been returned by "get_ready()".

        Raises ValueError if called without calling "prepare()" previously.
        """
        if self._ready_nodes is None:
            raise ValueError("prepare() must be called first")
        return self._nfinished < self._npassedout or bool(self._ready_nodes)

    def __bool__(self):
        return self.is_active()

    def done(self, *nodes):
        """Marks a set of nodes returned by "get_ready()" as processed.

        This method unblocks any successor of each node in *nodes* for being
        returned in the future by a call to "get_ready()".

        Raises ValueError if any node in *nodes* has already been marked as
        processed by a previous call to this method, if a node was not added
        to the graph by using "add()", if called without calling "prepare()"
        previously, or if node has not yet been returned by "get_ready()".
        """
        if self._ready_nodes is None:
            raise ValueError("prepare() must be called first")

        n2i = self._node2info

        for node in nodes:
            # Check if we know about this node (it was added previously using
            # add())
            if (nodeinfo := n2i.get(node)) is None:
                raise ValueError(f"node {node!r} was not added using add()")

            # If the node has not being returned (marked as ready) previously,
            # inform the user.
            stat = nodeinfo.npredecessors
            if stat != _NODE_OUT:
                if stat >= 0:
                    raise ValueError(
                        f"node {node!r} was not passed out (still not ready)")
                elif stat == _NODE_DONE:
                    raise ValueError(f"node {node!r} was already marked done")
                else:
                    assert False, f"node {node!r}: unknown status {stat}"

            # Mark the node as processed
            nodeinfo.npredecessors = _NODE_DONE

            # Go to all the successors and reduce the number of predecessors,
            # collecting all the ones that are ready to be returned in the
            # next get_ready() call.
            for successor in nodeinfo.successors:
                successor_info = n2i[successor]
                successor_info.npredecessors -= 1
                if successor_info.npredecessors == 0:
                    self._ready_nodes.append(successor)
            self._nfinished += 1

    def _find_cycle(self):
        n2i = self._node2info
        stack = []
        itstack = []
        seen = set()
        node2stacki = {}

        for node in n2i:
            if node in seen:
                continue

            while True:
                if node in seen:
                    # If we have seen already the node and is in the
                    # current stack we have found a cycle.
                    if node in node2stacki:
                        return stack[node2stacki[node]:] + [node]
                    # else go on to get next successor
                else:
                    seen.add(node)
                    itstack.append(iter(n2i[node].successors).__next__)
                    node2stacki[node] = len(stack)
                    stack.append(node)

                # Backtrack to the topmost stack entry with
                # at least another successor.
                while stack:
                    try:
                        node = itstack[-1]()
                        break
                    except StopIteration:
                        del node2stacki[stack.pop()]
                        itstack.pop()
                else:
                    break
        return None

    def static_order(self):
        self.prepare()
        while self.is_active():
            node_group = self.get_ready()
            yield from node_group
            self.done(*node_group)

    __class_getitem__ = classmethod(GenericAlias)
of predecessors collecting all the ones that are ready to be returned in the next get_ready call if we have seen already the node and is in the current stack we have found a cycle else go on to get next successor backtrack to the topmost stack entry with at least another successor returns an iterable of nodes in a topological order the particular order that is returned may depend on the specific order in which the items were inserted in the graph using this method does not require to call prepare or done if any cycle is detected exc cycleerror will be raised
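The cycle reporting described above (cycle available as `args[1]`, with the first and last node equal) can be demonstrated directly with the standard `graphlib` module; this short sketch uses a made-up two-node graph:

```python
from graphlib import TopologicalSorter, CycleError

# A depends on B and B depends on A: a two-node cycle.
ts = TopologicalSorter({"A": {"B"}, "B": {"A"}})
try:
    ts.prepare()
except CycleError as exc:
    cycle = exc.args[1]        # e.g. ['A', 'B', 'A']
    print(cycle[0] == cycle[-1])   # first and last node are the same
```

After the `CycleError` is raised, the instance can still be used: `get_ready()` keeps returning nodes until the cycle blocks further progress.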
from types import GenericAlias __all__ = ["TopologicalSorter", "CycleError"] _NODE_OUT = -1 _NODE_DONE = -2 class _NodeInfo: __slots__ = "node", "npredecessors", "successors" def __init__(self, node): self.node = node self.npredecessors = 0 self.successors = [] class CycleError(ValueError): pass class TopologicalSorter: def __init__(self, graph=None): self._node2info = {} self._ready_nodes = None self._npassedout = 0 self._nfinished = 0 if graph is not None: for node, predecessors in graph.items(): self.add(node, *predecessors) def _get_nodeinfo(self, node): if (result := self._node2info.get(node)) is None: self._node2info[node] = result = _NodeInfo(node) return result def add(self, node, *predecessors): if self._ready_nodes is not None: raise ValueError("Nodes cannot be added after a call to prepare()") nodeinfo = self._get_nodeinfo(node) nodeinfo.npredecessors += len(predecessors) for pred in predecessors: pred_info = self._get_nodeinfo(pred) pred_info.successors.append(node) def prepare(self): if self._ready_nodes is not None: raise ValueError("cannot prepare() more than once") self._ready_nodes = [ i.node for i in self._node2info.values() if i.npredecessors == 0 ] cycle = self._find_cycle() if cycle: raise CycleError(f"nodes are in a cycle", cycle) def get_ready(self): if self._ready_nodes is None: raise ValueError("prepare() must be called first") result = tuple(self._ready_nodes) n2i = self._node2info for node in result: n2i[node].npredecessors = _NODE_OUT self._ready_nodes.clear() self._npassedout += len(result) return result def is_active(self): if self._ready_nodes is None: raise ValueError("prepare() must be called first") return self._nfinished < self._npassedout or bool(self._ready_nodes) def __bool__(self): return self.is_active() def done(self, *nodes): if self._ready_nodes is None: raise ValueError("prepare() must be called first") n2i = self._node2info for node in nodes: if (nodeinfo := n2i.get(node)) is None: raise ValueError(f"node {node!r} was 
not added using add()") stat = nodeinfo.npredecessors if stat != _NODE_OUT: if stat >= 0: raise ValueError( f"node {node!r} was not passed out (still not ready)" ) elif stat == _NODE_DONE: raise ValueError(f"node {node!r} was already marked done") else: assert False, f"node {node!r}: unknown status {stat}" nodeinfo.npredecessors = _NODE_DONE for successor in nodeinfo.successors: successor_info = n2i[successor] successor_info.npredecessors -= 1 if successor_info.npredecessors == 0: self._ready_nodes.append(successor) self._nfinished += 1 def _find_cycle(self): n2i = self._node2info stack = [] itstack = [] seen = set() node2stacki = {} for node in n2i: if node in seen: continue while True: if node in seen: if node in node2stacki: return stack[node2stacki[node] :] + [node] else: seen.add(node) itstack.append(iter(n2i[node].successors).__next__) node2stacki[node] = len(stack) stack.append(node) while stack: try: node = itstack[-1]() break except StopIteration: del node2stacki[stack.pop()] itstack.pop() else: break return None def static_order(self): self.prepare() while self.is_active(): node_group = self.get_ready() yield from node_group self.done(*node_group) __class_getitem__ = classmethod(GenericAlias)
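A minimal usage sketch of the class above, with a made-up diamond-shaped dependency graph (each key's value is its set of predecessors):

```python
from graphlib import TopologicalSorter

# D depends on B and C; B and C each depend on A.
graph = {"D": {"B", "C"}, "C": {"A"}, "B": {"A"}}
order = list(TopologicalSorter(graph).static_order())
# A must come first and D last; B and C may appear in either order.
print(order)
```

For incremental (e.g. parallel) processing, use `prepare()`, then loop over `get_ready()` and `done()` while `is_active()` is true instead of `static_order()`.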
HMAC (Keyed-Hashing for Message Authentication) Python module.

Implements the HMAC algorithm as described by RFC 2104.

digest_size: the size of the digests returned by HMAC depends on the underlying hashing module used. Use digest_size from the instance of HMAC instead.

class HMAC: RFC 2104 HMAC class. Also complies with RFC 4231. This supports the API for cryptographic hash functions (PEP 247). blocksize = 64 (512-bit HMAC; can be changed in subclasses).

__init__(key, msg=None, digestmod=''): Create a new HMAC object.
    key: bytes or buffer, key for the keyed hash object.
    msg: bytes or buffer, initial input for the hash or None.
    digestmod: a hash name suitable for hashlib.new(), *or* a hashlib constructor returning a new hash object, *or* a module supporting PEP 247. Required as of 3.8, despite its position after the optional msg argument. Passing it as a keyword argument is recommended, though not required for legacy API reasons.
    Note: self.blocksize is the default blocksize; self.block_size is the effective block size, as well as the public API attribute.

update(msg): Feed data from msg into this hashing object.

copy(): Return a separate copy of this hashing object. An update to this copy won't affect the original object. (Calls __new__ directly to avoid the expensive __init__.)

_current(): Return a hash object for the current state. To be used only internally with digest() and hexdigest().

digest(): Return the hash value of this hashing object. This returns the HMAC value as bytes. The object is not altered in any way by this function; you can continue updating the object after calling this function.

hexdigest(): Like digest(), but returns a string of hexadecimal digits instead.

new(key, msg=None, digestmod=''): Create a new hashing object and return it.
    key: bytes or buffer, the starting key for the hash.
    msg: bytes or buffer, initial input for the hash or None.
    digestmod: a hash name suitable for hashlib.new(), *or* a hashlib constructor returning a new hash object, *or* a module supporting PEP 247. Required as of 3.8, despite its position after the optional msg argument. Passing it as a keyword argument is recommended, though not required for legacy API reasons.
    You can now feed arbitrary bytes into the object using its update() method, and can ask for the hash value at any time by calling its digest() or hexdigest() methods.

digest(key, msg, digest): Fast inline implementation of HMAC.
    key: bytes or buffer, the key for the keyed hash object.
    msg: bytes or buffer, input message.
    digest: a hash name suitable for hashlib.new() for best performance, *or* a hashlib constructor returning a new hash object, *or* a module supporting PEP 247.
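The incremental API described above can be sketched as follows; the key and message here are made-up placeholders:

```python
import hashlib
import hmac

# digestmod is required; passing it as a keyword argument is recommended.
h = hmac.new(b"secret-key", b"message", digestmod=hashlib.sha256)
print(h.hexdigest())   # 64 hex characters for SHA-256

# The one-shot helper computes the same MAC in a single call.
assert h.digest() == hmac.digest(b"secret-key", b"message", "sha256")
```

Note that `digest()` does not finalize the object: further `update()` calls remain valid after reading the MAC.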
import warnings as _warnings try: import _hashlib as _hashopenssl except ImportError: _hashopenssl = None _functype = None from _operator import _compare_digest as compare_digest else: compare_digest = _hashopenssl.compare_digest _functype = type(_hashopenssl.openssl_sha256) import hashlib as _hashlib trans_5C = bytes((x ^ 0x5C) for x in range(256)) trans_36 = bytes((x ^ 0x36) for x in range(256)) digest_size = None class HMAC: blocksize = 64 __slots__ = ( "_hmac", "_inner", "_outer", "block_size", "digest_size" ) def __init__(self, key, msg=None, digestmod=''): if not isinstance(key, (bytes, bytearray)): raise TypeError("key: expected bytes or bytearray, but got %r" % type(key).__name__) if not digestmod: raise TypeError("Missing required parameter 'digestmod'.") if _hashopenssl and isinstance(digestmod, (str, _functype)): try: self._init_hmac(key, msg, digestmod) except _hashopenssl.UnsupportedDigestmodError: self._init_old(key, msg, digestmod) else: self._init_old(key, msg, digestmod) def _init_hmac(self, key, msg, digestmod): self._hmac = _hashopenssl.hmac_new(key, msg, digestmod=digestmod) self.digest_size = self._hmac.digest_size self.block_size = self._hmac.block_size def _init_old(self, key, msg, digestmod): if callable(digestmod): digest_cons = digestmod elif isinstance(digestmod, str): digest_cons = lambda d=b'': _hashlib.new(digestmod, d) else: digest_cons = lambda d=b'': digestmod.new(d) self._hmac = None self._outer = digest_cons() self._inner = digest_cons() self.digest_size = self._inner.digest_size if hasattr(self._inner, 'block_size'): blocksize = self._inner.block_size if blocksize < 16: _warnings.warn('block_size of %d seems too small; using our ' 'default of %d.' % (blocksize, self.blocksize), RuntimeWarning, 2) blocksize = self.blocksize else: _warnings.warn('No block_size attribute on given digest object; ' 'Assuming %d.' 
% (self.blocksize), RuntimeWarning, 2) blocksize = self.blocksize if len(key) > blocksize: key = digest_cons(key).digest() self.block_size = blocksize key = key.ljust(blocksize, b'\0') self._outer.update(key.translate(trans_5C)) self._inner.update(key.translate(trans_36)) if msg is not None: self.update(msg) @property def name(self): if self._hmac: return self._hmac.name else: return f"hmac-{self._inner.name}" def update(self, msg): inst = self._hmac or self._inner inst.update(msg) def copy(self): other = self.__class__.__new__(self.__class__) other.digest_size = self.digest_size if self._hmac: other._hmac = self._hmac.copy() other._inner = other._outer = None else: other._hmac = None other._inner = self._inner.copy() other._outer = self._outer.copy() return other def _current(self): if self._hmac: return self._hmac else: h = self._outer.copy() h.update(self._inner.digest()) return h def digest(self): h = self._current() return h.digest() def hexdigest(self): h = self._current() return h.hexdigest() def new(key, msg=None, digestmod=''): return HMAC(key, msg, digestmod) def digest(key, msg, digest): if _hashopenssl is not None and isinstance(digest, (str, _functype)): try: return _hashopenssl.hmac_digest(key, msg, digest) except _hashopenssl.UnsupportedDigestmodError: pass if callable(digest): digest_cons = digest elif isinstance(digest, str): digest_cons = lambda d=b'': _hashlib.new(digest, d) else: digest_cons = lambda d=b'': digest.new(d) inner = digest_cons() outer = digest_cons() blocksize = getattr(inner, 'block_size', 64) if len(key) > blocksize: key = digest_cons(key).digest() key = key + b'\x00' * (blocksize - len(key)) inner.update(key.translate(trans_36)) outer.update(key.translate(trans_5C)) inner.update(msg) outer.update(inner.digest()) return outer.digest()
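When verifying a received MAC, use the `compare_digest` function imported at the top of the module rather than `==`, since it compares in constant time and avoids timing side channels. A short sketch with made-up inputs:

```python
import hashlib
import hmac

expected = hmac.new(b"k", b"payload", hashlib.sha256).hexdigest()

# Constant-time comparison of the expected and received MACs.
print(hmac.compare_digest(expected, expected))   # True

# Flip the last hex digit to simulate a tampered MAC.
tampered = expected[:-1] + ("0" if expected[-1] != "0" else "1")
print(hmac.compare_digest(expected, tampered))   # False
```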
HTTP status codes and reason phrases.

Status codes from the following RFCs are all observed:

    RFC 7231: Hypertext Transfer Protocol (HTTP/1.1), obsoletes 2616
    RFC 6585: Additional HTTP Status Codes
    RFC 3229: Delta encoding in HTTP
    RFC 4918: HTTP Extensions for WebDAV, obsoletes 2518
    RFC 5842: Binding Extensions to WebDAV
    RFC 7238: Permanent Redirect
    RFC 2295: Transparent Content Negotiation in HTTP
    RFC 2774: An HTTP Extension Framework
    RFC 7725: An HTTP Status Code to Report Legal Obstacles
    RFC 7540: Hypertext Transfer Protocol Version 2 (HTTP/2)
    RFC 2324: Hyper Text Coffee Pot Control Protocol (HTCPCP/1.0)
    RFC 8297: An HTTP Status Code for Indicating Hints
    RFC 8470: Using Early Data in HTTP

The codes fall into five categories: informational, success, redirection, client error, and server errors.

HTTP methods and descriptions. Methods from the following RFCs are all observed:

    RFC 7231: Hypertext Transfer Protocol (HTTP/1.1), obsoletes 2616
    RFC 5789: PATCH Method for HTTP
from enum import StrEnum, IntEnum, _simple_enum __all__ = ['HTTPStatus', 'HTTPMethod'] @_simple_enum(IntEnum) class HTTPStatus: def __new__(cls, value, phrase, description=''): obj = int.__new__(cls, value) obj._value_ = value obj.phrase = phrase obj.description = description return obj @property def is_informational(self): return 100 <= self <= 199 @property def is_success(self): return 200 <= self <= 299 @property def is_redirection(self): return 300 <= self <= 399 @property def is_client_error(self): return 400 <= self <= 499 @property def is_server_error(self): return 500 <= self <= 599 CONTINUE = 100, 'Continue', 'Request received, please continue' SWITCHING_PROTOCOLS = (101, 'Switching Protocols', 'Switching to new protocol; obey Upgrade header') PROCESSING = 102, 'Processing' EARLY_HINTS = 103, 'Early Hints' OK = 200, 'OK', 'Request fulfilled, document follows' CREATED = 201, 'Created', 'Document created, URL follows' ACCEPTED = (202, 'Accepted', 'Request accepted, processing continues off-line') NON_AUTHORITATIVE_INFORMATION = (203, 'Non-Authoritative Information', 'Request fulfilled from cache') NO_CONTENT = 204, 'No Content', 'Request fulfilled, nothing follows' RESET_CONTENT = 205, 'Reset Content', 'Clear input form for further input' PARTIAL_CONTENT = 206, 'Partial Content', 'Partial content follows' MULTI_STATUS = 207, 'Multi-Status' ALREADY_REPORTED = 208, 'Already Reported' IM_USED = 226, 'IM Used' MULTIPLE_CHOICES = (300, 'Multiple Choices', 'Object has several resources -- see URI list') MOVED_PERMANENTLY = (301, 'Moved Permanently', 'Object moved permanently -- see URI list') FOUND = 302, 'Found', 'Object moved temporarily -- see URI list' SEE_OTHER = 303, 'See Other', 'Object moved -- see Method and URL list' NOT_MODIFIED = (304, 'Not Modified', 'Document has not changed since given time') USE_PROXY = (305, 'Use Proxy', 'You must use proxy specified in Location to access this resource') TEMPORARY_REDIRECT = (307, 'Temporary Redirect', 'Object 
moved temporarily -- see URI list') PERMANENT_REDIRECT = (308, 'Permanent Redirect', 'Object moved permanently -- see URI list') BAD_REQUEST = (400, 'Bad Request', 'Bad request syntax or unsupported method') UNAUTHORIZED = (401, 'Unauthorized', 'No permission -- see authorization schemes') PAYMENT_REQUIRED = (402, 'Payment Required', 'No payment -- see charging schemes') FORBIDDEN = (403, 'Forbidden', 'Request forbidden -- authorization will not help') NOT_FOUND = (404, 'Not Found', 'Nothing matches the given URI') METHOD_NOT_ALLOWED = (405, 'Method Not Allowed', 'Specified method is invalid for this resource') NOT_ACCEPTABLE = (406, 'Not Acceptable', 'URI not available in preferred format') PROXY_AUTHENTICATION_REQUIRED = (407, 'Proxy Authentication Required', 'You must authenticate with this proxy before proceeding') REQUEST_TIMEOUT = (408, 'Request Timeout', 'Request timed out; try again later') CONFLICT = 409, 'Conflict', 'Request conflict' GONE = (410, 'Gone', 'URI no longer exists and has been permanently removed') LENGTH_REQUIRED = (411, 'Length Required', 'Client must specify Content-Length') PRECONDITION_FAILED = (412, 'Precondition Failed', 'Precondition in headers is false') REQUEST_ENTITY_TOO_LARGE = (413, 'Request Entity Too Large', 'Entity is too large') REQUEST_URI_TOO_LONG = (414, 'Request-URI Too Long', 'URI is too long') UNSUPPORTED_MEDIA_TYPE = (415, 'Unsupported Media Type', 'Entity body in unsupported format') REQUESTED_RANGE_NOT_SATISFIABLE = (416, 'Requested Range Not Satisfiable', 'Cannot satisfy request range') EXPECTATION_FAILED = (417, 'Expectation Failed', 'Expect condition could not be satisfied') IM_A_TEAPOT = (418, 'I\'m a Teapot', 'Server refuses to brew coffee because it is a teapot.') MISDIRECTED_REQUEST = (421, 'Misdirected Request', 'Server is not able to produce a response') UNPROCESSABLE_ENTITY = 422, 'Unprocessable Entity' LOCKED = 423, 'Locked' FAILED_DEPENDENCY = 424, 'Failed Dependency' TOO_EARLY = 425, 'Too Early' 
UPGRADE_REQUIRED = 426, 'Upgrade Required' PRECONDITION_REQUIRED = (428, 'Precondition Required', 'The origin server requires the request to be conditional') TOO_MANY_REQUESTS = (429, 'Too Many Requests', 'The user has sent too many requests in ' 'a given amount of time ("rate limiting")') REQUEST_HEADER_FIELDS_TOO_LARGE = (431, 'Request Header Fields Too Large', 'The server is unwilling to process the request because its header ' 'fields are too large') UNAVAILABLE_FOR_LEGAL_REASONS = (451, 'Unavailable For Legal Reasons', 'The server is denying access to the ' 'resource as a consequence of a legal demand') INTERNAL_SERVER_ERROR = (500, 'Internal Server Error', 'Server got itself in trouble') NOT_IMPLEMENTED = (501, 'Not Implemented', 'Server does not support this operation') BAD_GATEWAY = (502, 'Bad Gateway', 'Invalid responses from another server/proxy') SERVICE_UNAVAILABLE = (503, 'Service Unavailable', 'The server cannot process the request due to a high load') GATEWAY_TIMEOUT = (504, 'Gateway Timeout', 'The gateway server did not receive a timely response') HTTP_VERSION_NOT_SUPPORTED = (505, 'HTTP Version Not Supported', 'Cannot fulfill request') VARIANT_ALSO_NEGOTIATES = 506, 'Variant Also Negotiates' INSUFFICIENT_STORAGE = 507, 'Insufficient Storage' LOOP_DETECTED = 508, 'Loop Detected' NOT_EXTENDED = 510, 'Not Extended' NETWORK_AUTHENTICATION_REQUIRED = (511, 'Network Authentication Required', 'The client needs to authenticate to gain network access') @_simple_enum(StrEnum) class HTTPMethod: def __new__(cls, value, description): obj = str.__new__(cls, value) obj._value_ = value obj.description = description return obj def __repr__(self): return "<%s.%s>" % (self.__class__.__name__, self._name_) CONNECT = 'CONNECT', 'Establish a connection to the server.' DELETE = 'DELETE', 'Remove the target.' GET = 'GET', 'Retrieve the target.' HEAD = 'HEAD', 'Same as GET, but only retrieve the status line and header section.' 
OPTIONS = 'OPTIONS', 'Describe the communication options for the target.' PATCH = 'PATCH', 'Apply partial modifications to a target.' POST = 'POST', 'Perform target-specific processing with the request payload.' PUT = 'PUT', 'Replace the target with the request payload.' TRACE = 'TRACE', 'Perform a message loop-back test along the path to the target.'
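Because `HTTPStatus` is built on `IntEnum`, its members compare equal to plain integers while also carrying the `phrase` and `description` attributes defined above. A short usage sketch:

```python
from http import HTTPStatus

s = HTTPStatus(404)
print(s.phrase, "-", s.description)   # Not Found - Nothing matches the given URI

# IntEnum members compare equal to plain integers.
assert s == 404
assert HTTPStatus.OK.phrase == "OK"

# The same range check performed by the is_client_error property.
assert 400 <= s <= 499
```

The `is_informational` through `is_server_error` properties are only available on recent Python versions; the manual range check shown above works everywhere.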
import email.parser
import email.message
import errno
import http
import io
import re
import socket
import sys
import collections.abc
from urllib.parse import urlsplit

# HTTPMessage, parse_headers(), and the HTTP status code constants are
# intentionally omitted for simplicity
__all__ = ["HTTPResponse", "HTTPConnection",
           "HTTPException", "NotConnected", "UnknownProtocol",
           "UnknownTransferEncoding", "UnimplementedFileMode",
           "IncompleteRead", "InvalidURL", "ImproperConnectionState",
           "CannotSendRequest", "CannotSendHeader", "ResponseNotReady",
           "BadStatusLine", "LineTooLong", "RemoteDisconnected",
           "error", "responses"]

HTTP_PORT = 80
HTTPS_PORT = 443

_UNKNOWN = 'UNKNOWN'

# connection states
_CS_IDLE = 'Idle'
_CS_REQ_STARTED = 'Request-started'
_CS_REQ_SENT = 'Request-sent'

# hack to maintain backwards compatibility
globals().update(http.HTTPStatus.__members__)

# another hack to maintain backwards compatibility
# Mapping status codes to official W3C names
responses = {v: v.phrase for v in http.HTTPStatus.__members__.values()}

# maximal line length when calling readline().
_MAXLINE = 65536
_MAXHEADERS = 100

# Header name/value ABNF (http://tools.ietf.org/html/rfc7230#section-3.2)
#
# VCHAR          = %x21-7E
# obs-text       = %x80-FF
# header-field   = field-name ":" OWS field-value OWS
# field-name     = token
# field-value    = *( field-content / obs-fold )
# field-content  = field-vchar [ 1*( SP / HTAB ) field-vchar ]
# field-vchar    = VCHAR / obs-text
#
# obs-fold       = CRLF 1*( SP / HTAB )
#                ; obsolete line folding
#                ; see Section 3.2.4
#
# token          = 1*tchar
# tchar          = DIGIT / ALPHA / any VCHAR, except delimiters
#
# VCHAR defined in http://tools.ietf.org/html/rfc5234#appendix-B.1

# the patterns for both name and value are more lenient than RFC
# definitions to allow for backwards compatibility
_is_legal_header_name = re.compile(rb'[^:\s][^:\r\n]*').fullmatch
_is_illegal_header_value = re.compile(rb'\n(?![ \t])|\r(?![ \t\n])').search

# These characters are not allowed within HTTP URL paths.
#  See https://tools.ietf.org/html/rfc3986#section-3.3 and the
#  https://tools.ietf.org/html/rfc3986#appendix-A pchar definition.
# Prevents CVE-2019-9740.  Includes control characters such as \r\n.
# We don't restrict chars above \x7f as putrequest() limits us to ASCII.
_contains_disallowed_url_pchar_re = re.compile('[\x00-\x20\x7f]')
# Arguably only these _should_ be allowed:
#  _is_allowed_url_pchars_re = re.compile(r"^[/!$&'()*+,;=:@%a-zA-Z0-9._~-]+$")
# We are more lenient for assumed real world compatibility purposes.

# These characters are not allowed within HTTP method names
# to prevent http header injection.
_contains_disallowed_method_pchar_re = re.compile('[\x00-\x1f]')

# We always set the Content-Length header for these
# methods because some servers will otherwise respond with a 411
_METHODS_EXPECTING_BODY = {'PATCH', 'POST', 'PUT'}


def _encode(data, name='data'):
    """Call data.encode("latin-1") & error out properly with `name`"""
    try:
        return data.encode("latin-1")
    except UnicodeEncodeError as err:
        raise UnicodeEncodeError(
            err.encoding,
            err.object,
            err.start,
            err.end,
            "%s (%.20r) is not valid Latin-1. Use %s.encode('utf-8') "
            "if you want to send it encoded in UTF-8." %
            (name.title(), data[err.start:err.end], name)) from None


def _strip_ipv6_iface(enc_name: bytes) -> bytes:
    """Remove interface scope from IPv6 address."""
    enc_name, percent, _ = enc_name.partition(b"%")
    if percent:
        assert enc_name.startswith(b'['), enc_name
        enc_name += b']'
    return enc_name


class HTTPMessage(email.message.Message):

    # XXX The only usage of this method is in
    # http.server.CGIHTTPRequestHandler.  Maybe move the code there so
    # that it doesn't need to be part of the public API.  The API has
    # never been defined so this could cause backwards compatibility
    # issues.

    def getallmatchingheaders(self, name):
        name = name.lower() + ':'
        n = len(name)
        lst = []
        hit = 0
        for line in self.keys():
            if line[:n].lower() == name:
                hit = 1
            elif not line[:1].isspace():
                hit = 0
            if hit:
                lst.append(line)
        return lst


def _read_headers(fp):
    headers = []
    while True:
        line = fp.readline(_MAXLINE + 1)
        if len(line) > _MAXLINE:
            raise LineTooLong("header line")
        headers.append(line)
        if len(headers) > _MAXHEADERS:
            raise HTTPException("got more than %d headers" % _MAXHEADERS)
        if line in (b'\r\n', b'\n', b''):
            break
    return headers


def _parse_header_lines(header_lines, _class=HTTPMessage):
    hstring = b''.join(header_lines).decode('iso-8859-1')
    return email.parser.Parser(_class=_class).parsestr(hstring)


def parse_headers(fp, _class=HTTPMessage):
    headers = _read_headers(fp)
    return _parse_header_lines(headers, _class)
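`parse_headers()` reads RFC 2822-style headers from any binary file-like object, so it can be exercised without a socket; the header bytes below are made up for illustration:

```python
import io
import http.client

raw = (b"Host: example.com\r\n"
       b"Content-Type: text/plain\r\n"
       b"Content-Length: 11\r\n"
       b"\r\n")

# Returns an HTTPMessage (an email.message.Message subclass).
msg = http.client.parse_headers(io.BytesIO(raw))
print(msg["Content-Type"])     # text/plain
print(msg["Content-Length"])   # 11
```

The bytes from the stream are decoded as ISO-8859-1 before being handed to the email parser, matching the module's internal behavior.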
class HTTPResponse(io.BufferedIOBase):

    # See RFC 2616 sec 19.6 and RFC 1945 sec 6 for details.

    # The bytes from the socket object are iso-8859-1 strings.
    # See RFC 2616 sec 2.2 which notes an exception for
    # MIME-encoded text following RFC 2047.  The basic
    # status line parsing only accepts iso-8859-1.

    # __init__(sock, debuglevel=0, method=None, url=None):
    #   If the response includes a content-length header, we need to
    #   make sure that the client doesn't read more than the
    #   specified number of bytes.  If it does, it will block until
    #   the server times out and closes the connection.  This will
    #   happen if a self.fp.read() is done (without a size) whether
    #   self.fp is buffered or not.  So, no self.fp.read() by
    #   clients unless they know what they are doing.
    #   The HTTPResponse object is returned via urllib.  The clients
    #   of http and urllib expect different attributes for the
    #   headers.  headers is used here and supports urllib; msg is
    #   provided as a backwards compatibility layer for http
    #   clients.  version, status and reason come from the
    #   status line of the response.

    # _read_status():
    #   If the status line is empty, presumably the server closed the
    #   connection before sending a valid response.  An empty version
    #   will cause the next test to fail.  The status code is a
    #   three-digit number.

    # begin():
    #   If we've already started reading the response, do nothing.
    #   Read until we get a non-100 response, skipping the header
    #   from the 100 response.  Some servers might still return
    #   "0.9"; treat it as 1.0 anyway.  Then determine: are we using
    #   the chunked-style of transfer encoding?  Will the connection
    #   close at the end of the response?  Do we have a
    #   Content-Length?  (NOTE: RFC 2616, S4.4, #3 says we ignore
    #   this if tr_enc is "chunked".)  Does the body have a fixed
    #   length of zero?  If the connection remains open, and we
    #   aren't using chunked, and a content-length was not provided,
    #   then assume that the connection WILL close.

    # _check_close():
    #   An HTTP/1.1 proxy is assumed to stay open unless explicitly
    #   closed.  Some HTTP/1.0 implementations have support for
    #   persistent connections, using rules different than HTTP/1.1.
    #   For older HTTP, Keep-Alive indicates persistent connection.
    #   At least Akamai returns a "Connection: Keep-Alive" header,
    #   which was supposed to be sent by the client.
    #   "Proxy-Connection" is a Netscape hack.
    #   Otherwise, assume it will close.

    # These implementations are for the benefit of io.BufferedReader.

    # XXX This class should probably be revised to act more like
    # the "raw stream" that BufferedReader expects.

    def readable(self):
        """Always returns True"""
        return True

    # End of "raw stream" methods

    def isclosed(self):
        # NOTE: it is possible that we will not ever call self.close(). This
        #       case occurs when will_close is TRUE, length is None, and we
        #       read up to the last byte, but NOT past it.
        #
        # IMPLIES: if will_close is FALSE, then self.close() will ALWAYS be
        #          called, meaning self.isclosed() is meaningful.
        return self.fp is None
    def read(self, amt=None):
        """Read and return the response body, or up to the next amt bytes."""
        if self.fp is None:
            return b""

        if self._method == "HEAD":
            self._close_conn()
            return b""

        if self.chunked:
            return self._read_chunked(amt)

        if amt is not None:
            if self.length is not None and amt > self.length:
                # clip the read to the "end of response"
                amt = self.length
            s = self.fp.read(amt)
            if not s and amt:
                # Ideally, we would raise IncompleteRead if the content-length
                # wasn't satisfied, but it might break compatibility.
                self._close_conn()
            elif self.length is not None:
                self.length -= len(s)
                if not self.length:
                    self._close_conn()
            return s
        else:
            # Amount is not given (unbounded read) so we must check self.length
            if self.length is None:
                s = self.fp.read()
            else:
                try:
                    s = self._safe_read(self.length)
                except IncompleteRead:
                    self._close_conn()
                    raise
                self.length = 0
            self._close_conn()        # we read everything
            return s

    def readinto(self, b):
        if self.fp is None:
            return 0

        if self._method == "HEAD":
            self._close_conn()
            return 0

        if self.chunked:
            return self._readinto_chunked(b)

        if self.length is not None:
            if len(b) > self.length:
                # clip the read to the "end of response"
                b = memoryview(b)[0:self.length]

        # we do not use _safe_read() here because this may be a .will_close
        # connection, and the user is reading more bytes than will be provided
        # (for example, reading in 1k chunks)
        n = self.fp.readinto(b)
        if not n and b:
            # Ideally, we would raise IncompleteRead if the content-length
            # wasn't satisfied, but it might break compatibility.
            self._close_conn()
        elif self.length is not None:
            self.length -= n
            if not self.length:
                self._close_conn()
        return n

    def _read_next_chunk_size(self):
        # Read the next chunk size from the file
        line = self.fp.readline(_MAXLINE + 1)
        if len(line) > _MAXLINE:
            raise LineTooLong("chunk size")
        i = line.find(b";")
        if i >= 0:
            line = line[:i]  # strip chunk-extensions
        try:
            return int(line, 16)
        except ValueError:
            # close the connection as protocol synchronisation is
            # probably lost
            self._close_conn()
            raise

    def _read_and_discard_trailer(self):
        # read and discard trailer up to the CRLF terminator
        ### note: we shouldn't have any trailers!
        while True:
            line = self.fp.readline(_MAXLINE + 1)
            if len(line) > _MAXLINE:
                raise LineTooLong("trailer line")
            if not line:
                # a vanishingly small number of sites EOF without
                # sending the trailer
                break
            if line in (b'\r\n', b'\n', b''):
                break

    def _get_chunk_left(self):
        # return self.chunk_left, reading a new chunk if necessary.
        # chunk_left == 0: at the end of the current chunk, need to close it
        # chunk_left == None: No current chunk, should read next.
        # This function returns non-zero or None if the last chunk has
        # been read.
        chunk_left = self.chunk_left
        if not chunk_left:  # Can be 0 or None
            if chunk_left is not None:
                # We are at the end of chunk, discard chunk end
                self._safe_read(2)  # toss the CRLF at the end of the chunk
            try:
                chunk_left = self._read_next_chunk_size()
            except ValueError:
                raise IncompleteRead(b'')
            if chunk_left == 0:
                # last chunk: 1*("0") [ chunk-extension ] CRLF
                self._read_and_discard_trailer()
                # we read everything; close the "file"
                self._close_conn()
                chunk_left = None
            self.chunk_left = chunk_left
        return chunk_left

    def _read_chunked(self, amt=None):
        assert self.chunked != _UNKNOWN
        value = []
        try:
            while (chunk_left := self._get_chunk_left()) is not None:
                if amt is not None and amt <= chunk_left:
                    value.append(self._safe_read(amt))
                    self.chunk_left = chunk_left - amt
                    break

                value.append(self._safe_read(chunk_left))
                if amt is not None:
                    amt -= chunk_left
                self.chunk_left = 0
            return b''.join(value)
        except IncompleteRead as exc:
            raise IncompleteRead(b''.join(value)) from exc

    def _readinto_chunked(self, b):
        assert self.chunked != _UNKNOWN
        total_bytes = 0
        mvb = memoryview(b)
        try:
            while True:
                chunk_left = self._get_chunk_left()
                if chunk_left is None:
                    return total_bytes

                if len(mvb) <= chunk_left:
                    n = self._safe_readinto(mvb)
                    self.chunk_left = chunk_left - n
                    return total_bytes + n

                temp_mvb = mvb[:chunk_left]
                n = self._safe_readinto(temp_mvb)
                mvb = mvb[n:]
                total_bytes += n
                self.chunk_left = 0

        except IncompleteRead:
            raise IncompleteRead(bytes(b[0:total_bytes]))

    def _safe_read(self, amt):
        data = self.fp.read(amt)
        if len(data) < amt:
            raise IncompleteRead(data, amt-len(data))
        return data

    def _safe_readinto(self, b):
        """Same as _safe_read, but for reading into a buffer."""
        amt = len(b)
        n = self.fp.readinto(b)
        if n < amt:
            raise IncompleteRead(bytes(b[:n]), amt-n)
        return n

    # read1(): read with at most one underlying system call.  If at
    # least one byte is buffered, return that instead.
    # peek(): having this enables IOBase.readline() to read more than
    # one byte at a time.
    # readline() on a chunked body falls back to IOBase.readline(),
    # which uses peek() and read().
    # _read1_chunked(): strictly speaking, _get_chunk_left() may cause
    # more than one read, but that is ok, since that is to satisfy the
    # chunked protocol.
protocol strictly speaking getchunkleft may cause more than one read but that is ok since that is to satisfy the chunked protocol peek is allowed to return more than requested just request the entire chunk and truncate what we get returns the value of the header matching name if there are multiple matching headers the values are combined into a single string separated by commas and spaces if no matching header is found returns default or none if the default is not specified if the headers are unknown raises http client responsenotready return list of header value tuples if self headers is none raise responsenotready return listself headers items we override iobase iter so that it doesn t check for closedness def iterself return self for compatibility with oldstyle urllib responses def infoself return self headers def geturlself return self url def getcodeself return self status def createhttpscontexthttpversion function also used by urllib request to be able to set the checkhostname attribute on a context object context ssl createdefaulthttpscontext send alpn extension to indicate http1 1 protocol if httpversion 11 context setalpnprotocols http1 1 enable pha for tls 1 3 connections if available if context posthandshakeauth is not none context posthandshakeauth true return context class httpconnection httpvsn 11 httpvsnstr http1 1 responseclass httpresponse defaultport httpport autoopen 1 debuglevel 0 staticmethod def istextiostream return isinstancestream io textiobase staticmethod def getcontentlengthbody method if body is none do an explicit check for not none here to distinguish between unset and set but empty if method upper in methodsexpectingbody return 0 else return none if hasattrbody read filelike object return none try does it implement the buffer protocol bytes bytearray array mv memoryviewbody return mv nbytes except typeerror pass if isinstancebody str return lenbody return none def initself host portnone timeoutsocket globaldefaulttimeout 
                 source_address=None, blocksize=8192):
        self.timeout = timeout
        self.source_address = source_address
        self.blocksize = blocksize
        self.sock = None
        self._buffer = []
        self.__response = None
        self.__state = _CS_IDLE
        self._method = None
        self._tunnel_host = None
        self._tunnel_port = None
        self._tunnel_headers = {}
        self._raw_proxy_headers = None

        (self.host, self.port) = self._get_hostport(host, port)

        self._validate_host(self.host)

        # This is stored as an instance variable to allow unit
        # tests to replace it with a suitable mockup
        self._create_connection = socket.create_connection

    def set_tunnel(self, host, port=None, headers=None):
        if self.sock:
            raise RuntimeError("Can't set up tunnel for established connection")

        self._tunnel_host, self._tunnel_port = self._get_hostport(host, port)
        if headers:
            self._tunnel_headers = headers.copy()
        else:
            self._tunnel_headers.clear()

        if not any(header.lower() == "host" for header in self._tunnel_headers):
            encoded_host = self._tunnel_host.encode("idna").decode("ascii")
            self._tunnel_headers["Host"] = "%s:%d" % (
                encoded_host, self._tunnel_port)

    def _get_hostport(self, host, port):
        if port is None:
            i = host.rfind(':')
            j = host.rfind(']')         # ipv6 addresses have [...]
            if i > j:
                try:
                    port = int(host[i+1:])
                except ValueError:
                    if host[i+1:] == "":  # http://foo.com:/ == http://foo.com/
                        port = self.default_port
                    else:
                        raise InvalidURL("nonnumeric port: '%s'" % host[i+1:])
                host = host[:i]
            else:
                port = self.default_port
            if host and host[0] == '[' and host[-1] == ']':
                host = host[1:-1]

        return (host, port)

    def set_debuglevel(self, level):
        self.debuglevel = level

    def _tunnel(self):
        connect = b"CONNECT %s:%d %s\r\n" % (
            self._tunnel_host.encode("idna"), self._tunnel_port,
            self._http_vsn_str.encode("ascii"))
        headers = [connect]
        for header, value in self._tunnel_headers.items():
            headers.append(f"{header}: {value}\r\n".encode("latin-1"))
        headers.append(b"\r\n")
        # Making a single send() call instead of one per line encourages
        # the host OS to use a more optimal packet size instead of
        # potentially emitting a series of small packets.
        self.send(b"".join(headers))
        del headers

        response = self.response_class(self.sock, method=self._method)
        try:
            (version, code, message) = response._read_status()

            self._raw_proxy_headers = _read_headers(response.fp)

            if self.debuglevel > 0:
                for header in self._raw_proxy_headers:
                    print('header:', header.decode())

            if code != http.HTTPStatus.OK:
                self.close()
                raise OSError(f"Tunnel connection failed: {code} "
                              f"{message.strip()}")
        finally:
            response.close()

    def get_proxy_response_headers(self):
        return (
            _parse_header_lines(self._raw_proxy_headers)
            if self._raw_proxy_headers is not None
            else None
        )

    def connect(self):
        """Connect to the host and port specified in __init__."""
        self.sock = self._create_connection(
            (self.host, self.port), self.timeout, self.source_address)
        # Might fail in OSs that don't implement TCP_NODELAY
        try:
            self.sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        except OSError as e:
            if e.errno != errno.ENOPROTOOPT:
                raise

        if self._tunnel_host:
            self._tunnel()

    def close(self):
        """Close the connection to the HTTP server."""
        self.__state = _CS_IDLE
        try:
            sock = self.sock
            if sock:
                self.sock = None
                sock.close()   # close it manually... there may be other refs
        finally:
            response = self.__response
            if response:
                self.__response = None
                response.close()

    def send(self, data):
        """Send ``data`` to the server.

        ``data`` can be a string object, a bytes object, an array object, a
        file-like object that supports a .read() method, or an iterable
        object.
        """
        if self.sock is None:
            if self.auto_open:
                self.connect()
            else:
                raise NotConnected()

        if self.debuglevel > 0:
            print("send:", repr(data))
        if hasattr(data, "read"):
            if self.debuglevel > 0:
                print("sending a readable")
            encode = self._is_textIO(data)
            if encode and self.debuglevel > 0:
                print("encoding file using iso-8859-1")
            while datablock := data.read(self.blocksize):
                if encode:
                    datablock = datablock.encode("iso-8859-1")
                sys.audit("http.client.send", self, datablock)
                self.sock.sendall(datablock)
            return
        sys.audit("http.client.send", self, data)
        try:
            self.sock.sendall(data)
        except TypeError:
            if isinstance(data, collections.abc.Iterable):
                for d in data:
                    self.sock.sendall(d)
            else:
                raise TypeError("data should be a bytes-like object "
                                "or an iterable, got %r" % type(data))

    def _output(self, s):
        """Add a line of output to the current request buffer.

        Assumes that the line does *not* end with \\r\\n.
        """
        self._buffer.append(s)

    def _read_readable(self, readable):
        if self.debuglevel > 0:
            print("reading a readable")
        encode = self._is_textIO(readable)
        if encode and self.debuglevel > 0:
            print("encoding file using iso-8859-1")
        while datablock := readable.read(self.blocksize):
            if encode:
                datablock = datablock.encode("iso-8859-1")
            yield datablock

    def _send_output(self, message_body=None, encode_chunked=False):
        """Send the currently buffered request and clear the buffer.

        Appends an extra \\r\\n to the buffer.
        A message_body may be specified, to be appended to the request.
        """
        self._buffer.extend((b"", b""))
        msg = b"\r\n".join(self._buffer)
        del self._buffer[:]
        self.send(msg)

        if message_body is not None:
            # create a consistent interface to message_body
            if hasattr(message_body, 'read'):
                # Let file-like take precedence over byte-like.  This
                # is needed to allow the current position of mmap'ed
                # files to be taken into account.
                chunks = self._read_readable(message_body)
            else:
                try:
                    # this is solely to check to see if message_body
                    # implements the buffer api.  it would be easier
                    # to capture if PyObject_CheckBuffer was exposed to
                    # Python.
                    memoryview(message_body)
                except TypeError:
                    try:
                        chunks = iter(message_body)
                    except TypeError:
                        raise TypeError("message_body should be a bytes-like "
                                        "object or an iterable, got %r"
                                        % type(message_body))
                else:
                    # the object implements the buffer interface and
                    # can be passed directly into socket methods
                    chunks = (message_body,)

            for chunk in chunks:
                if not chunk:
                    if self.debuglevel > 0:
                        print('Zero length chunk ignored')
                    continue

                if encode_chunked and self._http_vsn == 11:
                    # chunked encoding
                    chunk = f'{len(chunk):X}\r\n'.encode('ascii') + chunk \
                        + b'\r\n'
                self.send(chunk)

            if encode_chunked and self._http_vsn == 11:
                # end chunked transfer
                self.send(b'0\r\n\r\n')

    def putrequest(self, method, url, skip_host=False,
                   skip_accept_encoding=False):
        # if a prior response has been completed, then forget about it.
        if self.__response and self.__response.isclosed():
            self.__response = None

        # in certain cases, we cannot issue another request on this
        # connection.  this occurs when:
        #   1) we are in the process of sending a request (_CS_REQ_STARTED)
        #   2) a response to a previous request has signalled that it is going
        #      to close the connection upon completion.
        #   3) the headers for the previous response have not been read, thus
        #      we cannot determine whether point (2) is true (_CS_REQ_SENT)
        #
        # if there is no prior response, then we can request at will.
        #
        # if point (2) is true, then we will have passed the socket to the
        # response (effectively meaning, "there is no prior response"), and
        # will open a new one when a new request is made.
        #
        # Note: if a prior response exists, then we *can* start a new request.
        # We are not allowed to begin fetching the response to this new
        # request, however, until that prior response is complete.
        if self.__state == _CS_IDLE:
            self.__state = _CS_REQ_STARTED
        else:
            raise CannotSendRequest(self.__state)

        self._validate_method(method)

        # Save the method for use later in the response phase
        self._method = method

        url = url or '/'
        self._validate_path(url)

        request = '%s %s %s' % (method, url, self._http_vsn_str)

        self._output(self._encode_request(request))

        if self._http_vsn == 11:
            # issue some standard headers for better HTTP/1.1 compliance

            if not skip_host:
                # this header is issued *only* for HTTP/1.1 connections.
                # more specifically, this means it is only issued when the
                # client uses the new HTTPConnection() class.
                # backwards-compat clients will be using HTTP/1.0 and those
                # clients may be issuing this header themselves. we should
                # NOT issue it twice; some web servers (such as Apache) barf
                # when they see two Host: headers.

                # if we need a non-standard port, include it in the header.
                # if the request is going through a proxy, but the host of
                # the actual URL, not the host of the proxy.
                netloc = ''
                if url.startswith('http'):
                    nil, netloc, nil, nil, nil = urlsplit(url)

                if netloc:
                    try:
                        netloc_enc = netloc.encode("ascii")
                    except UnicodeEncodeError:
                        netloc_enc = netloc.encode("idna")
                    self.putheader('Host', _strip_ipv6_iface(netloc_enc))
                else:
                    if self._tunnel_host:
                        host = self._tunnel_host
                        port = self._tunnel_port
                    else:
                        host = self.host
                        port = self.port

                    try:
                        host_enc = host.encode("ascii")
                    except UnicodeEncodeError:
                        host_enc = host.encode("idna")

                    # As per RFC 2732, an IPv6 address should be wrapped
                    # with [] when used as Host header
                    if ":" in host:
                        host_enc = b'[' + host_enc + b']'
                        host_enc = _strip_ipv6_iface(host_enc)

                    if port == self.default_port:
                        self.putheader('Host', host_enc)
                    else:
                        host_enc = host_enc.decode("ascii")
                        self.putheader('Host', "%s:%s" % (host_enc, port))

            # NOTE: we are assuming that clients will not attempt to set
            # these headers since *this* library must deal with the
            # consequences. this also means that when the supporting
            # libraries are updated to recognize other forms, then this
            # code should be changed (removed or updated).

            # we only want a Content-Encoding of "identity" since we don't
            # support encodings such as x-gzip or x-deflate.
            if not skip_accept_encoding:
                self.putheader('Accept-Encoding', 'identity')

            # we can accept "chunked" Transfer-Encodings, but no others
            # NOTE: no TE header implies *only* "chunked"
            #self.putheader('TE', 'chunked')

            # if TE is supplied in the header, then it must appear in a
            # Connection header.
            #self.putheader('Connection', 'TE')
        else:
            # For HTTP/1.0, the server will assume "not chunked"
            pass

    def _encode_request(self, request):
        # ASCII also helps prevent CVE-2019-9740.
        return request.encode('ascii')

    def _validate_method(self, method):
        """Validate a method name for putrequest."""
        # prevent http header injection
        match = _contains_disallowed_method_pchar_re.search(method)
        if match:
            raise ValueError(
                f"method can't contain control characters. {method!r} "
                f"(found at least {match.group()!r})")

    def _validate_path(self, url):
        """Validate a
        url for putrequest.  Prevent CVE-2019-9740."""
        match = _contains_disallowed_url_pchar_re.search(url)
        if match:
            raise InvalidURL(f"URL can't contain control characters. {url!r} "
                             f"(found at least {match.group()!r})")

    def _validate_host(self, host):
        """Validate a host so it doesn't contain control characters.

        Prevent CVE-2019-18348.
        """
        match = _contains_disallowed_url_pchar_re.search(host)
        if match:
            raise InvalidURL(f"URL can't contain control characters. {host!r} "
                             f"(found at least {match.group()!r})")

    def putheader(self, header, *values):
        """Send a request header line to the server.

        For example: h.putheader('Accept', 'text/html')
        """
        if self.__state != _CS_REQ_STARTED:
            raise CannotSendHeader()

        if hasattr(header, 'encode'):
            header = header.encode('ascii')

        if not _is_legal_header_name(header):
            raise ValueError('Invalid header name %r' % (header,))

        values = list(values)
        for i, one_value in enumerate(values):
            if hasattr(one_value, 'encode'):
                values[i] = one_value.encode('latin-1')
            elif isinstance(one_value, int):
                values[i] = str(one_value).encode('ascii')

            if _is_illegal_header_value(values[i]):
                raise ValueError('Invalid header value %r' % (values[i],))

        value = b'\r\n\t'.join(values)
        header = header + b': ' + value
        self._output(header)

    def endheaders(self, message_body=None, *, encode_chunked=False):
        """Indicate that the last header line has been sent to the server.

        This method sends the request to the server.  The optional
        message_body argument can be used to pass a message body
        associated with the request.
        """
        if self.__state == _CS_REQ_STARTED:
            self.__state = _CS_REQ_SENT
        else:
            raise CannotSendHeader()
        self._send_output(message_body, encode_chunked=encode_chunked)

    def request(self, method, url, body=None, headers={}, *,
                encode_chunked=False):
        """Send a complete request to the server."""
        self._send_request(method, url, body, headers, encode_chunked)

    def _send_request(self, method, url, body, headers, encode_chunked):
        # Honor explicitly requested Host: and Accept-Encoding: headers.
        header_names = frozenset(k.lower() for k in headers)
        skips = {}
        if 'host' in header_names:
            skips['skip_host'] = 1
        if 'accept-encoding' in header_names:
            skips['skip_accept_encoding'] = 1

        self.putrequest(method, url, **skips)

        # chunked encoding will happen if HTTP/1.1 is used and either
        # the caller passes encode_chunked=True or the following
        # conditions hold:
        # 1. content-length has not been explicitly set
        # 2. the body is a file or iterable, but not a str or bytes-like
        # 3. Transfer-Encoding has not been explicitly set by the caller

        if 'content-length' not in header_names:
            # only chunk body if not explicitly set for backwards
            # compatibility, assuming the client code is already handling the
            # chunking
            if 'transfer-encoding' not in header_names:
                # if content-length cannot be automatically determined, fall
                # back to chunked encoding
                encode_chunked = False
                content_length = self._get_content_length(body, method)
                if content_length is None:
                    if body is not None:
                        if self.debuglevel > 0:
                            print('Unable to determine size of %r' % body)
                        encode_chunked = True
                        self.putheader('Transfer-Encoding', 'chunked')
                else:
                    self.putheader('Content-Length', str(content_length))
        else:
            encode_chunked = False

        for hdr, value in headers.items():
            self.putheader(hdr, value)
        if isinstance(body, str):
            # RFC 2616 Section 3.7.1 says that text default has a
            # default charset of iso-8859-1.
            body = _encode(body, 'body')
        self.endheaders(body, encode_chunked=encode_chunked)

    def getresponse(self):
        """Get the response from the server."""

        # if a prior response has been completed, then forget about it.
        if self.__response and self.__response.isclosed():
            self.__response = None

        # if a prior response exists, then it must be completed (otherwise,
        # we cannot read this response's header to determine the
        # connection-close behavior)
        if self.__state != _CS_REQ_SENT or self.__response:
            raise ResponseNotReady(self.__state)

        if self.debuglevel > 0:
            response = self.response_class(self.sock, self.debuglevel,
                                           method=self._method)
        else:
            response = self.response_class(self.sock, method=self._method)

        try:
            try:
                response.begin()
            except ConnectionError:
                self.close()
                raise
            assert response.will_close != _UNKNOWN
            self.__state = _CS_IDLE

            if response.will_close:
                # this effectively passes the connection to the response
                self.close()
            else:
                # remember this, so we can tell when it is complete
                self.__response = response

            return response
        except:
            response.close()
            raise


try:
    import ssl
except ImportError:
    pass
else:
    class HTTPSConnection(HTTPConnection):
        "This class allows communication via SSL."

        default_port = HTTPS_PORT

        def __init__(self, host, port=None,
                     timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
                     source_address=None, *, context=None, blocksize=8192):
            super(HTTPSConnection, self).__init__(host, port, timeout,
                                                  source_address,
                                                  blocksize=blocksize)
            if context is None:
                context = _create_https_context(self._http_vsn)
            self._context = context

        def connect(self):
            "Connect to a host on a given (SSL) port."
            super().connect()

            if self._tunnel_host:
                server_hostname = self._tunnel_host
            else:
                server_hostname = self.host

            self.sock = self._context.wrap_socket(
                self.sock, server_hostname=server_hostname)

    __all__.append("HTTPSConnection")


class HTTPException(Exception):
    # Subclasses that
    # define an __init__ must call Exception.__init__ or define self.args.
    # Otherwise, str() will fail.
    pass

class NotConnected(HTTPException):
    pass

class InvalidURL(HTTPException):
    pass

class UnknownProtocol(HTTPException):
    def __init__(self, version):
        self.args = version,
        self.version = version

class UnknownTransferEncoding(HTTPException):
    pass

class UnimplementedFileMode(HTTPException):
    pass

class IncompleteRead(HTTPException):
    def __init__(self, partial, expected=None):
        self.args = partial,
        self.partial = partial
        self.expected = expected
    def __repr__(self):
        if self.expected is not None:
            e = ', %i more expected' % self.expected
        else:
            e = ''
        return '%s(%i bytes read%s)' % (self.__class__.__name__,
                                        len(self.partial), e)
    __str__ = object.__str__

class ImproperConnectionState(HTTPException):
    pass

class CannotSendRequest(ImproperConnectionState):
    pass

class CannotSendHeader(ImproperConnectionState):
    pass

class ResponseNotReady(ImproperConnectionState):
    pass

class BadStatusLine(HTTPException):
    def __init__(self, line):
        if not line:
            line = repr(line)
        self.args = line,
        self.line = line

class LineTooLong(HTTPException):
    def __init__(self, line_type):
        HTTPException.__init__(self, "got more than %d bytes when reading %s"
                                     % (_MAXLINE, line_type))

class RemoteDisconnected(ConnectionResetError, BadStatusLine):
    def __init__(self, *pos, **kw):
        BadStatusLine.__init__(self, "")
        ConnectionResetError.__init__(self, *pos, **kw)

# for backwards compatibility
error = HTTPException


HTTP/1.1 client library

<intro stuff goes here>
<other stuff, too>

HTTPConnection goes through a number of "states", which define when a client
may legally make another request or fetch the response for a particular
request. This diagram details these state transitions:

    (null)
      |
      | HTTPConnection()
      v
    Idle
      |
      | putrequest()
      v
    Request-started
      |
      | ( putheader() )*  endheaders()
      v
    Request-sent
      |\_____________________________
      |                              | getresponse() raises
      | response = getresponse()     | ConnectionError
      v                              v
    Unread-response                Idle
    [Response-headers-read]
      |\____________________
      |                     |
      | response.read()     | putrequest()
      v                     v
    Idle                  Req-started-unread-response
                     ______/|
                   /        |
   response.read() |        | ( putheader() )*  endheaders()
                   v        v
       Request-started    Req-sent-unread-response
                            |
                            | response.read()
                            v
                          Request-sent

This diagram presents the following rules:
  -- a second
request may not be started until the response headers have been read; a
response object cannot be retrieved until the request has been sent.

There is no differentiation between an unread response body and a partially
read response body.

Note: This enforcement is applied by the HTTPConnection class. The
HTTPResponse class does not enforce this state machine, which implies
sophisticated clients may accelerate the request/response pipeline. Caution
should be taken, though: accelerating the states beyond the above pattern may
imply knowledge of the server's connection-close behavior for certain
requests. For example, it is impossible to tell whether the server will close
the connection UNTIL the response headers have been read; this means that
further requests cannot be placed into the pipeline until it is known that
the server will NOT be closing the connection.

Logical State                  __state            __response
-------------                  -------            ----------
Idle                           _CS_IDLE           None
Request-started                _CS_REQ_STARTED    None
Request-sent                   _CS_REQ_SENT       None
Unread-response                _CS_IDLE           <response_class>
Req-started-unread-response    _CS_REQ_STARTED    <response_class>
Req-sent-unread-response       _CS_REQ_SENT       <response_class>

HTTPMessage, parse_headers(), and the HTTP status code constants are
intentionally omitted from this diagram for simplicity.

Module constants include: the connection states (_CS_IDLE, _CS_REQ_STARTED,
_CS_REQ_SENT); a hack to maintain backwards compatibility (copying the
HTTPStatus members into the module namespace); another backwards-compat hack
mapping status codes to official W3C names (the `responses` dict); and the
maximal line length when calling readline() (_MAXLINE).

Header name/value ABNF (https://tools.ietf.org/html/rfc7230#section-3.2):

    VCHAR          = %x21-7E
    obs-text       = %x80-FF
    header-field   = field-name ":" OWS field-value OWS
    field-name     = token
    field-value    = *( field-content / obs-fold )
    field-content  = field-vchar [ 1*( SP / HTAB ) field-vchar ]
    field-vchar    = VCHAR / obs-text
    obs-fold       = CRLF 1*( SP / HTAB )
                   ; obsolete line folding
                   ; see Section 3.2.4
    token          = 1*tchar
    tchar          = "!" / "#" / "$" / "%" / "&" / "'" / "*" / "+" / "-" /
                     "." / "^" / "_" / "`" / "|" / "~" / DIGIT / ALPHA
                   ; any VCHAR, except delimiters

VCHAR is defined in https://tools.ietf.org/html/rfc5234#appendix-B.1. The
patterns for both name and value are more lenient than the RFC definitions,
to allow for backwards compatibility.

These characters are not
allowed within HTTP URL paths; see
https://tools.ietf.org/html/rfc3986#section-3.3 and the
https://tools.ietf.org/html/rfc3986#appendix-A pchar definition. This
prevents CVE-2019-9740 and covers control characters such as \r and \n. We
don't restrict chars above \x7f, as putrequest() limits us to ASCII anyway.
Arguably only a stricter allow-list of pchar characters *should* be allowed
(a commented-out _is_allowed_url_pchars_re pattern exists in the source),
but we are more lenient, for assumed real-world compatibility purposes.

These characters are not allowed within HTTP method names, to prevent HTTP
header injection.

We always set the Content-Length header for these methods, because some
servers will otherwise respond with a 411.

_encode(): calls data.encode("latin-1"), but shows a better error message on
failure.

_strip_ipv6_iface(): removes the interface scope from an IPv6 address.

XXX: the only usage of getallmatchingheaders() is in
http.server.CGIHTTPRequestHandler. Maybe move the code there, so that it
doesn't need to be part of the public API. The API has never been defined,
so this could cause backwards-compatibility issues.

getallmatchingheaders(name): find all header lines matching a given header
name. Look through the list of headers and find all lines matching a given
header name (and their continuation lines). A list of the lines is returned,
without interpretation. If the header does not occur, an empty list is
returned. If the header occurs multiple times, all occurrences are returned.
Case is not important in the header name.

_read_headers(fp): reads potential header lines into a list from a file
pointer. Length of line is limited by _MAXLINE, and the number of headers is
limited by _MAXHEADERS.

_parse_header_lines(): parses only RFC 2822 headers from header lines. The
email parser wants to see strings rather than bytes, but a TextIOWrapper
around self.rfile would buffer too many bytes from the stream (bytes which
we later need to read as bytes), so we read the correct bytes here, as
bytes, for the email parser to parse.

parse_headers(fp): parses only RFC 2822 headers from a file pointer.

See RFC 2616 sec 19.6 and RFC 1945 sec 6 for details: the bytes from the
socket object are ISO-8859-1 strings. See RFC 2616 sec 2.2, which notes an
exception for MIME-
encoded text following RFC 2047. The basic status-line parsing only accepts
ISO-8859-1.

If the response includes a Content-Length header, we need to make sure that
the client doesn't read more than the specified number of bytes. If it does,
it will block until the server times out and closes the connection. This
will happen if a self.fp.read() is done (without a size) whether self.fp is
buffered or not. So, no self.fp.read() by clients, unless they know what
they are doing.

The HTTPResponse object is returned via urllib. The clients of http and
urllib expect different attributes for the headers: `headers` is used here
and supports urllib; `msg` is provided as a backwards-compatibility layer
for HTTP clients. Also stored, from the status line of the response:
HTTP-Version, Status-Code, and Reason-Phrase. Further state: is "chunked"
being used, bytes left to read in the current chunk, the number of bytes
left in the response, and whether the connection will close at the end of
the response.

In _read_status(): an empty line presumably means the server closed the
connection before sending a valid response; an empty version will cause the
next test to fail. The status code is a three-digit number. If we've already
started reading the response and receive a 100 Continue, read until we get a
non-100 response, skipping the header from the 100 response. Some servers
might still return "0.9"; treat it as 1.0 anyway, and use the HTTP/1.1 code
for HTTP/1.x where x >= 1.

In begin(): determine whether the chunked style of transfer encoding is in
use; whether the connection will close at the end of the response; and
whether we have a Content-Length (note: RFC 2616, S4.4, #3 says we ignore
this if tr_enc is "chunked"; nonsensical negative lengths are ignored).
Also determine whether the body has a fixed length of zero (1xx codes,
etc.). If the connection remains open, and we aren't using chunked, and a
Content-Length was not provided, then assume that the connection WILL close.

In _check_close(): an HTTP/1.1 proxy is assumed to stay open unless
explicitly closed. Some HTTP/1.0 implementations have support for persistent
connections, using rules different than HTTP/1.1: for older HTTP, Keep-Alive
indicates a persistent connection. At least Akamai returns a
"Connection: Keep-Alive" header, which was supposed to be sent by the
client. "Proxy-Connection" is a Netscape hack. Otherwise, assume the
connection will close, and set the "closed" flag on _close_conn().

The read/close implementations are for the benefit of io.BufferedReader.
XXX: this class should probably be revised to act more like the "raw stream"
that BufferedReader expects.

readable() always returns True. "End of raw stream" methods: isclosed() is
True if the connection is closed. NOTE: it is possible that we will not ever
call self.close(). This case occurs when will_close is TRUE, length is None,
and we read up to the last byte, but NOT past it. IMPLIES: if will_close is
FALSE, then self.close() will ALWAYS be called, meaning self.isclosed() is
meaningful.

read(amt): read and return the response body, or up to the next amt bytes.
The read is clipped to the end of the response. Ideally, we would raise
IncompleteRead if the content length wasn't satisfied, but it might break
compatibility. If the amount is not given, this is an unbounded read, so we
must check self.length; once we have read everything, the connection is
closed.

readinto(b): read up to len(b) bytes into bytearray b and return the number
of bytes read. The read is clipped to the end of the response. We do not use
_safe_read() here, because this may be a will_close connection, and the user
is reading more bytes than will be provided (for example, reading in 1k
chunks). Ideally, we would raise IncompleteRead if the content length wasn't
satisfied, but it might break compatibility.

_read_next_chunk_size(): read the next chunk size from the file, stripping
chunk extensions; on error, close the connection, as protocol
synchronisation is probably lost.

_read_and_discard_trailer(): read and discard the trailer up to the CRLF
terminator. NOTE: we shouldn't have any trailers! A vanishingly small number
of sites EOF without sending the trailer.

_get_chunk_left(): return self.chunk_left, reading a new chunk if necessary.
chunk_left == 0 means we are at the end of the current chunk and need to
close it; chunk_left == None means there is no current chunk, and the next
one should be read. This function returns non-zero, or None if the last
chunk has been read. chunk_left can be 0 or None: if we are at the end of a
chunk, discard the chunk end (toss the CRLF at the end of the chunk). The
last chunk is 1*("0") [ chunk-extension ] CRLF; after it, we have read
everything, so close the "file".
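The chunk-size/trailer handling described above can be exercised outside the
response class. Below is a minimal, illustrative sketch of the same framing
rules (the `decode_chunked` helper is hypothetical, not part of http.client,
and assumes the complete body is already in memory rather than on a socket):

```python
import io

def decode_chunked(raw: bytes) -> bytes:
    """Decode an HTTP/1.1 chunked-encoded body from a byte string (sketch)."""
    fp = io.BytesIO(raw)
    out = []
    while True:
        line = fp.readline()
        # strip chunk extensions, then parse the hex chunk size
        size = int(line.split(b";", 1)[0], 16)
        if size == 0:
            # last chunk: 1*("0") [ chunk-extension ] CRLF;
            # discard the (normally empty) trailer up to the CRLF terminator
            while fp.readline() not in (b"\r\n", b"\n", b""):
                pass
            break
        out.append(fp.read(size))
        fp.read(2)  # toss the CRLF at the end of the chunk
    return b"".join(out)
```

For example, `decode_chunked(b"4\r\nWiki\r\n5\r\npedia\r\n0\r\n\r\n")` yields
`b"Wikipedia"`. The real implementation additionally raises IncompleteRead
when the stream ends early, which this sketch omits.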
_safe_read(amt): read the number of bytes requested. This function should be
used when <amt> bytes "should" be present for reading. If the bytes are
truly not available (due to EOF), then the IncompleteRead exception can be
used to detect the problem. _safe_readinto(b) is the same as _safe_read, but
for reading into a buffer.

read1(n): read with at most one underlying system call. If at least one byte
is buffered, return that instead. Having this enables IOBase.readline() to
read more than one byte at a time. For chunked responses, readline() falls
back to IOBase.readline(), which uses peek() and read(). Strictly speaking,
_get_chunk_left() may cause more than one read, but that is ok, since that
is to satisfy the chunked protocol. In _read1_chunked(n), n is clamped to
chunk_left if it is negative or larger than chunk_left. In _peek_chunked(),
peek doesn't worry about protocol EOF, and peek is allowed to return more
than requested: just request the entire chunk, and truncate what we get.

getheader(name, default): returns the value of the header matching name. If
there are multiple matching headers, the values are combined into a single
string, separated by commas and spaces. If no matching header is found,
returns default, or None if the default is not specified. If the headers are
unknown, raises http.client.ResponseNotReady. getheaders() returns a list of
(header, value) tuples.

We override IOBase.__iter__ so that it doesn't check for closed-ness. For
compatibility with old-style urllib responses:

info(): returns an instance of the class mimetools.Message containing
meta-information associated with the URL. When the method is HTTP, these
headers are those returned by the server at the head of the retrieved HTML
page (including Content-Length and Content-Type). When the method is FTP, a
Content-Length header will be present if (as is now usual) the server passed
back a file length in response to the FTP retrieval request. A Content-Type
header will be present if the MIME type can be guessed. When the method is
local-file, returned headers will include a Date representing the file's
last-modified time, a Content-
Length giving the file size, and a Content-Type containing a guess at the
file's type. See also the description of the mimetools module.

geturl(): return the real URL of the page. In some cases, the HTTP server
redirects a client to another URL. The urlopen() function handles this
transparently, but in some cases the caller needs to know which URL the
client was redirected to. The geturl() method can be used to get at this
redirected URL.

getcode(): return the HTTP status code that was sent with the response, or
None if the URL is not an HTTP URL.

_create_https_context(): a function also used by urllib.request, to be able
to set the check_hostname attribute on a context object. It sends the ALPN
extension to indicate the HTTP/1.1 protocol, and enables PHA for TLS 1.3
connections, if available.

_is_textIO(): test whether a file-like object is a text or a binary stream.

_get_content_length(body, method): get the content-length based on the body.
If the body is None, we set Content-Length: 0 for methods that expect a body
(RFC 7230, Section 3.3.2); we do an explicit check for "not None" here, to
distinguish between unset and set-but-empty. We also set the Content-Length
for any method if the body is a str or bytes-like object (does it implement
the buffer protocol: bytes, bytearray, array?) and not a file-like object.

In HTTPConnection.__init__, _create_connection is stored as an instance
variable, to allow unit tests to replace it with a suitable mockup.

set_tunnel(host, port, headers): set up the host and port for HTTP CONNECT
tunnelling. In a connection that uses HTTP CONNECT tunnelling, the host
passed to the constructor is used as a proxy server that relays all
communication to the endpoint passed to set_tunnel. This is done by sending
an HTTP CONNECT request to the proxy server when the connection is
established. This method must be called before the HTTP connection has been
established. The headers argument should be a mapping of extra HTTP headers
to send with the CONNECT request. As HTTP/1.1 is used for the HTTP CONNECT
tunnelling request (as per the RFC,
https://tools.ietf.org/html/rfc7231#section-4.3.6), an HTTP Host: header
must be provided, matching the authority-form of the request target provided
as the
destination for the CONNECT request. If an HTTP Host: header is not provided
via the headers argument, one is generated and transmitted automatically.

In _get_hostport(): IPv6 addresses have [...], and "http://foo.com:/" is
treated the same as "http://foo.com/".

In _tunnel(): making a single send() call, instead of one per line,
encourages the host OS to use a more optimal packet size, instead of
potentially emitting a series of small packets.

get_proxy_response_headers(): returns a dictionary with the headers of the
response received from the proxy server to the CONNECT request sent to set
the tunnel. If the CONNECT request was not sent, the method returns None.

connect(): connect to the host and port specified in __init__; might fail in
OSs that don't implement TCP_NODELAY.

close(): close the connection to the HTTP server; the socket is closed
manually, as there may be other refs.

send(data): send data to the server. data can be a string object, a bytes
object, an array object, a file-like object that supports a .read() method,
or an iterable object.

_output(s): add a line of output to the current request buffer; assumes that
the line does *not* end with \r\n.

_send_output(): send the currently buffered request and clear the buffer.
Appends an extra \r\n to the buffer. A message_body may be specified, to be
appended to the request. A consistent interface to message_body is created:
file-like takes precedence over byte-like (this is needed to allow the
current position of mmap'ed files to be taken into account). A memoryview
conversion is done solely to check whether message_body implements the
buffer API (it would be easier to capture if PyObject_CheckBuffer were
exposed to Python); if the object implements the buffer interface, it can be
passed directly into socket methods. Chunked encoding frames each chunk,
then sends the end-of-chunked-transfer marker.

putrequest(method, url, skip_host, skip_accept_encoding): send a request to
the server. `method` specifies an HTTP request method, e.g. 'GET'; `url`
specifies the object being requested, e.g. '/index.html'; `skip_host`, if
True, does not automatically add a 'Host:' header; `skip_accept_encoding`,
if True, does not automatically add an 'Accept-Encoding:' header.

If a prior response has been completed, then forget about it. In certain
cases, we cannot issue another request on this connection. This occurs when:
1) we are in the process of sending a request (_CS_REQ_STARTED); 2) a
response to a previous request has signalled that it is going to close the
connection upon completion; 3) the headers for the previous response have
not been read, thus we cannot determine whether point (2) is true
(_CS_REQ_SENT). If there is no prior response, then we can request at will.
If point (2) is true, then we will have passed the socket to the response
(effectively meaning "there is no prior response"), and will open a new one
when a new request is made. Note: if a prior response exists, then we *can*
start a new request. We are not allowed to begin fetching the response to
this new request, however, until that prior response is complete.

The method is saved for use later in the response phase. Some standard
headers are issued for better HTTP/1.1 compliance. The Host header is issued
only for HTTP/1.1 connections; more specifically, this means it is only
issued when the client uses the new HTTPConnection() class.
Backwards-compat clients will be using HTTP/1.0, and those clients may be
issuing this header themselves; we should NOT issue it twice (some web
servers, such as Apache, barf when they see two Host: headers). If we need a
non-standard port, include it in the header. If the request is going through
a proxy, use the host of the actual URL, not the host of the proxy. As per
RFC 2732, an IPv6 address should be wrapped with [] when used as a Host
header.

NOTE: we are assuming that clients will not attempt to set these headers,
since *this* library must deal with the consequences. This also means that
when the supporting libraries are updated to recognize other forms, then
this code should be changed (removed or updated).

We only want a Content-Encoding of "identity", since we don't support
encodings such as x-gzip or x-deflate. We can accept "chunked"
Transfer-Encodings, but no others. NOTE: no TE header implies *only*
"chunked" (self.putheader('TE', 'chunked')); if TE is supplied in the
header, then it must appear in a Connection header
(self.putheader('Connection', 'TE')). Otherwise, for HTTP/1.
0, the server will assume the body is "not chunked".

_encode_request(): ASCII also helps prevent CVE-2019-9740.
_validate_method(): validate a method name for putrequest; prevents HTTP
header injection. _validate_path(): validate a url for putrequest; prevents
CVE-2019-9740. _validate_host(): validate a host so it doesn't contain
control characters; prevents CVE-2019-18348.

putheader(): send a request header line to the server. For example:
h.putheader('Accept', 'text/html'). endheaders(): indicate that the last
header line has been sent to the server. This method sends the request to
the server; the optional message_body argument can be used to pass a message
body associated with the request.

request(): send a complete request to the server. _send_request(): honor
explicitly requested Host: and Accept-Encoding: headers. Chunked encoding
will happen if HTTP/1.1 is used and either the caller passes
encode_chunked=True, or the following conditions hold: 1) Content-Length has
not been explicitly set; 2) the body is a file or iterable, but not a str or
bytes-like; 3) Transfer-Encoding has not been explicitly set by the caller.
The body is only chunked if neither header is explicitly set (for backwards
compatibility, assuming the client code is already handling the chunking);
if the content length cannot be automatically determined, fall back to
chunked encoding. RFC 2616 Section 3.7.1 says that text default has a
default charset of ISO-8859-1.

getresponse(): get the response from the server. If the HTTPConnection is in
the correct state, returns an instance of HTTPResponse, or of whatever
object is returned by the response_class variable. If a request has not been
sent, or if a previous response has not been handled, ResponseNotReady is
raised. If the HTTP response indicates that the connection should be closed,
then it will be closed before the response is returned. When the connection
is closed, the underlying socket is closed. If a prior response has been
completed, then forget about it. If a prior response exists, then it must be
completed; otherwise, we cannot read this response's header to determine the
connection-close behavior. NOTE: if a prior response existed, but was
connection-close, then the socket and response were made independent of this
HTTPConnection object, since a new request requires that we open a whole new
connection. This means the prior response had one of two states: 1)
will_close: this connection was reset, and the prior socket and response
operate independently; 2) persistent: the response was retained, and we
await its isclosed() status to become true. If the new response will close,
this effectively passes the connection to the response; otherwise, remember
it, so we can tell when it is complete.

HTTPException subclasses that define an __init__ must call
Exception.__init__ or define self.args; otherwise, str() will fail. The
`error` name is kept as an alias for backwards compatibility.
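The request/response state machine documented above (Idle, Request-started,
Request-sent, then back to Idle once the body is drained) can be seen
end-to-end with a throwaway local server. A hedged sketch follows: the
`_EchoHandler` class and `demo()` function are illustrative names, not part
of http.client, and the server is only there to give the client something to
talk to:

```python
import http.client
import http.server
import threading

class _EchoHandler(http.server.BaseHTTPRequestHandler):
    """Tiny illustrative handler; not part of http.client."""
    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

def demo():
    # Bind to port 0 so the OS picks a free port.
    server = http.server.HTTPServer(("127.0.0.1", 0), _EchoHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    try:
        # Idle -> Request-started -> Request-sent -> (read body) -> Idle
        conn = http.client.HTTPConnection(
            "127.0.0.1", server.server_address[1], timeout=5)
        conn.request("GET", "/")      # putrequest + headers + endheaders
        resp = conn.getresponse()     # only legal in the Request-sent state
        data = resp.read()            # must be drained before a new request
        conn.close()
        return resp.status, data
    finally:
        server.shutdown()
```

Calling `getresponse()` again without an intervening `request()` would raise
ResponseNotReady, and calling `request()` before draining a persistent
response would raise CannotSendRequest, exactly as the diagram prescribes.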
r import email.parser import email.message import errno import http import io import re import socket import sys import collections.abc from urllib.parse import urlsplit __all__ = ["HTTPResponse", "HTTPConnection", "HTTPException", "NotConnected", "UnknownProtocol", "UnknownTransferEncoding", "UnimplementedFileMode", "IncompleteRead", "InvalidURL", "ImproperConnectionState", "CannotSendRequest", "CannotSendHeader", "ResponseNotReady", "BadStatusLine", "LineTooLong", "RemoteDisconnected", "error", "responses"] HTTP_PORT = 80 HTTPS_PORT = 443 _UNKNOWN = 'UNKNOWN' _CS_IDLE = 'Idle' _CS_REQ_STARTED = 'Request-started' _CS_REQ_SENT = 'Request-sent' globals().update(http.HTTPStatus.__members__) responses = {v: v.phrase for v in http.HTTPStatus.__members__.values()} _MAXLINE = 65536 _MAXHEADERS = 100 _is_legal_header_name = re.compile(rb'[^:\s][^:\r\n]*').fullmatch _is_illegal_header_value = re.compile(rb'\n(?![ \t])|\r(?![ \t\n])').search _contains_disallowed_url_pchar_re = re.compile('[\x00-\x20\x7f]') _contains_disallowed_method_pchar_re = re.compile('[\x00-\x1f]') _METHODS_EXPECTING_BODY = {'PATCH', 'POST', 'PUT'} def _encode(data, name='data'): try: return data.encode("latin-1") except UnicodeEncodeError as err: raise UnicodeEncodeError( err.encoding, err.object, err.start, err.end, "%s (%.20r) is not valid Latin-1. Use %s.encode('utf-8') " "if you want to send it encoded in UTF-8." 
% (name.title(), data[err.start:err.end], name)) from None def _strip_ipv6_iface(enc_name: bytes) -> bytes: enc_name, percent, _ = enc_name.partition(b"%") if percent: assert enc_name.startswith(b'['), enc_name enc_name += b']' return enc_name class HTTPMessage(email.message.Message): def getallmatchingheaders(self, name): name = name.lower() + ':' n = len(name) lst = [] hit = 0 for line in self.keys(): if line[:n].lower() == name: hit = 1 elif not line[:1].isspace(): hit = 0 if hit: lst.append(line) return lst def _read_headers(fp): headers = [] while True: line = fp.readline(_MAXLINE + 1) if len(line) > _MAXLINE: raise LineTooLong("header line") headers.append(line) if len(headers) > _MAXHEADERS: raise HTTPException("got more than %d headers" % _MAXHEADERS) if line in (b'\r\n', b'\n', b''): break return headers def _parse_header_lines(header_lines, _class=HTTPMessage): hstring = b''.join(header_lines).decode('iso-8859-1') return email.parser.Parser(_class=_class).parsestr(hstring) def parse_headers(fp, _class=HTTPMessage): headers = _read_headers(fp) return _parse_header_lines(headers, _class) class HTTPResponse(io.BufferedIOBase): def __init__(self, sock, debuglevel=0, method=None, url=None): self.fp = sock.makefile("rb") self.debuglevel = debuglevel self._method = method self.headers = self.msg = None self.version = _UNKNOWN self.status = _UNKNOWN self.reason = _UNKNOWN self.chunked = _UNKNOWN self.chunk_left = _UNKNOWN self.length = _UNKNOWN self.will_close = _UNKNOWN def _read_status(self): line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1") if len(line) > _MAXLINE: raise LineTooLong("status line") if self.debuglevel > 0: print("reply:", repr(line)) if not line: raise RemoteDisconnected("Remote end closed connection without" " response") try: version, status, reason = line.split(None, 2) except ValueError: try: version, status = line.split(None, 1) reason = "" except ValueError: version = "" if not version.startswith("HTTP/"): self._close_conn() raise 
BadStatusLine(line) try: status = int(status) if status < 100 or status > 999: raise BadStatusLine(line) except ValueError: raise BadStatusLine(line) return version, status, reason def begin(self): if self.headers is not None: return while True: version, status, reason = self._read_status() if status != CONTINUE: break skipped_headers = _read_headers(self.fp) if self.debuglevel > 0: print("headers:", skipped_headers) del skipped_headers self.code = self.status = status self.reason = reason.strip() if version in ("HTTP/1.0", "HTTP/0.9"): self.version = 10 elif version.startswith("HTTP/1."): self.version = 11 else: raise UnknownProtocol(version) self.headers = self.msg = parse_headers(self.fp) if self.debuglevel > 0: for hdr, val in self.headers.items(): print("header:", hdr + ":", val) tr_enc = self.headers.get("transfer-encoding") if tr_enc and tr_enc.lower() == "chunked": self.chunked = True self.chunk_left = None else: self.chunked = False self.will_close = self._check_close() self.length = None length = self.headers.get("content-length") if length and not self.chunked: try: self.length = int(length) except ValueError: self.length = None else: if self.length < 0: self.length = None else: self.length = None if (status == NO_CONTENT or status == NOT_MODIFIED or 100 <= status < 200 or self._method == "HEAD"): self.length = 0 if (not self.will_close and not self.chunked and self.length is None): self.will_close = True def _check_close(self): conn = self.headers.get("connection") if self.version == 11: if conn and "close" in conn.lower(): return True return False if self.headers.get("keep-alive"): return False if conn and "keep-alive" in conn.lower(): return False pconn = self.headers.get("proxy-connection") if pconn and "keep-alive" in pconn.lower(): return False return True def _close_conn(self): fp = self.fp self.fp = None fp.close() def close(self): try: super().close() finally: if self.fp: self._close_conn() def flush(self): super().flush() if self.fp: 
self.fp.flush() def readable(self): return True def isclosed(self): return self.fp is None def read(self, amt=None): if self.fp is None: return b"" if self._method == "HEAD": self._close_conn() return b"" if self.chunked: return self._read_chunked(amt) if amt is not None: if self.length is not None and amt > self.length: amt = self.length s = self.fp.read(amt) if not s and amt: self._close_conn() elif self.length is not None: self.length -= len(s) if not self.length: self._close_conn() return s else: if self.length is None: s = self.fp.read() else: try: s = self._safe_read(self.length) except IncompleteRead: self._close_conn() raise self.length = 0 self._close_conn() return s def readinto(self, b): if self.fp is None: return 0 if self._method == "HEAD": self._close_conn() return 0 if self.chunked: return self._readinto_chunked(b) if self.length is not None: if len(b) > self.length: b = memoryview(b)[0:self.length] n = self.fp.readinto(b) if not n and b: self._close_conn() elif self.length is not None: self.length -= n if not self.length: self._close_conn() return n def _read_next_chunk_size(self): line = self.fp.readline(_MAXLINE + 1) if len(line) > _MAXLINE: raise LineTooLong("chunk size") i = line.find(b";") if i >= 0: line = line[:i] try: return int(line, 16) except ValueError: self._close_conn() raise def _read_and_discard_trailer(self): while True: line = self.fp.readline(_MAXLINE + 1) if len(line) > _MAXLINE: raise LineTooLong("trailer line") if not line: break if line in (b'\r\n', b'\n', b''): break def _get_chunk_left(self): chunk_left = self.chunk_left if not chunk_left: if chunk_left is not None: self._safe_read(2) try: chunk_left = self._read_next_chunk_size() except ValueError: raise IncompleteRead(b'') if chunk_left == 0: self._read_and_discard_trailer() self._close_conn() chunk_left = None self.chunk_left = chunk_left return chunk_left def _read_chunked(self, amt=None): assert self.chunked != _UNKNOWN value = [] try: while (chunk_left := 
self._get_chunk_left()) is not None: if amt is not None and amt <= chunk_left: value.append(self._safe_read(amt)) self.chunk_left = chunk_left - amt break value.append(self._safe_read(chunk_left)) if amt is not None: amt -= chunk_left self.chunk_left = 0 return b''.join(value) except IncompleteRead as exc: raise IncompleteRead(b''.join(value)) from exc def _readinto_chunked(self, b): assert self.chunked != _UNKNOWN total_bytes = 0 mvb = memoryview(b) try: while True: chunk_left = self._get_chunk_left() if chunk_left is None: return total_bytes if len(mvb) <= chunk_left: n = self._safe_readinto(mvb) self.chunk_left = chunk_left - n return total_bytes + n temp_mvb = mvb[:chunk_left] n = self._safe_readinto(temp_mvb) mvb = mvb[n:] total_bytes += n self.chunk_left = 0 except IncompleteRead: raise IncompleteRead(bytes(b[0:total_bytes])) def _safe_read(self, amt): data = self.fp.read(amt) if len(data) < amt: raise IncompleteRead(data, amt-len(data)) return data def _safe_readinto(self, b): amt = len(b) n = self.fp.readinto(b) if n < amt: raise IncompleteRead(bytes(b[:n]), amt-n) return n def read1(self, n=-1): if self.fp is None or self._method == "HEAD": return b"" if self.chunked: return self._read1_chunked(n) if self.length is not None and (n < 0 or n > self.length): n = self.length result = self.fp.read1(n) if not result and n: self._close_conn() elif self.length is not None: self.length -= len(result) return result def peek(self, n=-1): if self.fp is None or self._method == "HEAD": return b"" if self.chunked: return self._peek_chunked(n) return self.fp.peek(n) def readline(self, limit=-1): if self.fp is None or self._method == "HEAD": return b"" if self.chunked: return super().readline(limit) if self.length is not None and (limit < 0 or limit > self.length): limit = self.length result = self.fp.readline(limit) if not result and limit: self._close_conn() elif self.length is not None: self.length -= len(result) return result def _read1_chunked(self, n): chunk_left = 
self._get_chunk_left() if chunk_left is None or n == 0: return b'' if not (0 <= n <= chunk_left): n = chunk_left read = self.fp.read1(n) self.chunk_left -= len(read) if not read: raise IncompleteRead(b"") return read def _peek_chunked(self, n): try: chunk_left = self._get_chunk_left() except IncompleteRead: return b'' if chunk_left is None: return b'' return self.fp.peek(chunk_left)[:chunk_left] def fileno(self): return self.fp.fileno() def getheader(self, name, default=None): if self.headers is None: raise ResponseNotReady() headers = self.headers.get_all(name) or default if isinstance(headers, str) or not hasattr(headers, '__iter__'): return headers else: return ', '.join(headers) def getheaders(self): if self.headers is None: raise ResponseNotReady() return list(self.headers.items()) def __iter__(self): return self def info(self): return self.headers def geturl(self): return self.url def getcode(self): return self.status def _create_https_context(http_version): context = ssl._create_default_https_context() if http_version == 11: context.set_alpn_protocols(['http/1.1']) if context.post_handshake_auth is not None: context.post_handshake_auth = True return context class HTTPConnection: _http_vsn = 11 _http_vsn_str = 'HTTP/1.1' response_class = HTTPResponse default_port = HTTP_PORT auto_open = 1 debuglevel = 0 @staticmethod def _is_textIO(stream): return isinstance(stream, io.TextIOBase) @staticmethod def _get_content_length(body, method): if body is None: if method.upper() in _METHODS_EXPECTING_BODY: return 0 else: return None if hasattr(body, 'read'): return None try: mv = memoryview(body) return mv.nbytes except TypeError: pass if isinstance(body, str): return len(body) return None def __init__(self, host, port=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT, source_address=None, blocksize=8192): self.timeout = timeout self.source_address = source_address self.blocksize = blocksize self.sock = None self._buffer = [] self.__response = None self.__state = _CS_IDLE 
self._method = None self._tunnel_host = None self._tunnel_port = None self._tunnel_headers = {} self._raw_proxy_headers = None (self.host, self.port) = self._get_hostport(host, port) self._validate_host(self.host) self._create_connection = socket.create_connection def set_tunnel(self, host, port=None, headers=None): if self.sock: raise RuntimeError("Can't set up tunnel for established connection") self._tunnel_host, self._tunnel_port = self._get_hostport(host, port) if headers: self._tunnel_headers = headers.copy() else: self._tunnel_headers.clear() if not any(header.lower() == "host" for header in self._tunnel_headers): encoded_host = self._tunnel_host.encode("idna").decode("ascii") self._tunnel_headers["Host"] = "%s:%d" % ( encoded_host, self._tunnel_port) def _get_hostport(self, host, port): if port is None: i = host.rfind(':') j = host.rfind(']') if i > j: try: port = int(host[i+1:]) except ValueError: if host[i+1:] == "": port = self.default_port else: raise InvalidURL("nonnumeric port: '%s'" % host[i+1:]) host = host[:i] else: port = self.default_port if host and host[0] == '[' and host[-1] == ']': host = host[1:-1] return (host, port) def set_debuglevel(self, level): self.debuglevel = level def _tunnel(self): connect = b"CONNECT %s:%d %s\r\n" % ( self._tunnel_host.encode("idna"), self._tunnel_port, self._http_vsn_str.encode("ascii")) headers = [connect] for header, value in self._tunnel_headers.items(): headers.append(f"{header}: {value}\r\n".encode("latin-1")) headers.append(b"\r\n") self.send(b"".join(headers)) del headers response = self.response_class(self.sock, method=self._method) try: (version, code, message) = response._read_status() self._raw_proxy_headers = _read_headers(response.fp) if self.debuglevel > 0: for header in self._raw_proxy_headers: print('header:', header.decode()) if code != http.HTTPStatus.OK: self.close() raise OSError(f"Tunnel connection failed: {code} {message.strip()}") finally: response.close() def 
get_proxy_response_headers(self): return ( _parse_header_lines(self._raw_proxy_headers) if self._raw_proxy_headers is not None else None ) def connect(self): sys.audit("http.client.connect", self, self.host, self.port) self.sock = self._create_connection( (self.host,self.port), self.timeout, self.source_address) try: self.sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1) except OSError as e: if e.errno != errno.ENOPROTOOPT: raise if self._tunnel_host: self._tunnel() def close(self): self.__state = _CS_IDLE try: sock = self.sock if sock: self.sock = None sock.close() finally: response = self.__response if response: self.__response = None response.close() def send(self, data): if self.sock is None: if self.auto_open: self.connect() else: raise NotConnected() if self.debuglevel > 0: print("send:", repr(data)) if hasattr(data, "read") : if self.debuglevel > 0: print("sending a readable") encode = self._is_textIO(data) if encode and self.debuglevel > 0: print("encoding file using iso-8859-1") while datablock := data.read(self.blocksize): if encode: datablock = datablock.encode("iso-8859-1") sys.audit("http.client.send", self, datablock) self.sock.sendall(datablock) return sys.audit("http.client.send", self, data) try: self.sock.sendall(data) except TypeError: if isinstance(data, collections.abc.Iterable): for d in data: self.sock.sendall(d) else: raise TypeError("data should be a bytes-like object " "or an iterable, got %r" % type(data)) def _output(self, s): self._buffer.append(s) def _read_readable(self, readable): if self.debuglevel > 0: print("reading a readable") encode = self._is_textIO(readable) if encode and self.debuglevel > 0: print("encoding file using iso-8859-1") while datablock := readable.read(self.blocksize): if encode: datablock = datablock.encode("iso-8859-1") yield datablock def _send_output(self, message_body=None, encode_chunked=False): self._buffer.extend((b"", b"")) msg = b"\r\n".join(self._buffer) del self._buffer[:] self.send(msg) if 
message_body is not None: if hasattr(message_body, 'read'): chunks = self._read_readable(message_body) else: try: memoryview(message_body) except TypeError: try: chunks = iter(message_body) except TypeError: raise TypeError("message_body should be a bytes-like " "object or an iterable, got %r" % type(message_body)) else: chunks = (message_body,) for chunk in chunks: if not chunk: if self.debuglevel > 0: print('Zero length chunk ignored') continue if encode_chunked and self._http_vsn == 11: chunk = f'{len(chunk):X}\r\n'.encode('ascii') + chunk \ + b'\r\n' self.send(chunk) if encode_chunked and self._http_vsn == 11: self.send(b'0\r\n\r\n') def putrequest(self, method, url, skip_host=False, skip_accept_encoding=False): if self.__response and self.__response.isclosed(): self.__response = None if self.__state == _CS_IDLE: self.__state = _CS_REQ_STARTED else: raise CannotSendRequest(self.__state) self._validate_method(method) self._method = method url = url or '/' self._validate_path(url) request = '%s %s %s' % (method, url, self._http_vsn_str) self._output(self._encode_request(request)) if self._http_vsn == 11: if not skip_host: netloc = '' if url.startswith('http'): nil, netloc, nil, nil, nil = urlsplit(url) if netloc: try: netloc_enc = netloc.encode("ascii") except UnicodeEncodeError: netloc_enc = netloc.encode("idna") self.putheader('Host', _strip_ipv6_iface(netloc_enc)) else: if self._tunnel_host: host = self._tunnel_host port = self._tunnel_port else: host = self.host port = self.port try: host_enc = host.encode("ascii") except UnicodeEncodeError: host_enc = host.encode("idna") if ":" in host: host_enc = b'[' + host_enc + b']' host_enc = _strip_ipv6_iface(host_enc) if port == self.default_port: self.putheader('Host', host_enc) else: host_enc = host_enc.decode("ascii") self.putheader('Host', "%s:%s" % (host_enc, port)) if not skip_accept_encoding: self.putheader('Accept-Encoding', 'identity') else: pass def _encode_request(self, request): return 
request.encode('ascii') def _validate_method(self, method): match = _contains_disallowed_method_pchar_re.search(method) if match: raise ValueError( f"method can't contain control characters. {method!r} " f"(found at least {match.group()!r})") def _validate_path(self, url): match = _contains_disallowed_url_pchar_re.search(url) if match: raise InvalidURL(f"URL can't contain control characters. {url!r} " f"(found at least {match.group()!r})") def _validate_host(self, host): match = _contains_disallowed_url_pchar_re.search(host) if match: raise InvalidURL(f"URL can't contain control characters. {host!r} " f"(found at least {match.group()!r})") def putheader(self, header, *values): if self.__state != _CS_REQ_STARTED: raise CannotSendHeader() if hasattr(header, 'encode'): header = header.encode('ascii') if not _is_legal_header_name(header): raise ValueError('Invalid header name %r' % (header,)) values = list(values) for i, one_value in enumerate(values): if hasattr(one_value, 'encode'): values[i] = one_value.encode('latin-1') elif isinstance(one_value, int): values[i] = str(one_value).encode('ascii') if _is_illegal_header_value(values[i]): raise ValueError('Invalid header value %r' % (values[i],)) value = b'\r\n\t'.join(values) header = header + b': ' + value self._output(header) def endheaders(self, message_body=None, *, encode_chunked=False): if self.__state == _CS_REQ_STARTED: self.__state = _CS_REQ_SENT else: raise CannotSendHeader() self._send_output(message_body, encode_chunked=encode_chunked) def request(self, method, url, body=None, headers={}, *, encode_chunked=False): self._send_request(method, url, body, headers, encode_chunked) def _send_request(self, method, url, body, headers, encode_chunked): header_names = frozenset(k.lower() for k in headers) skips = {} if 'host' in header_names: skips['skip_host'] = 1 if 'accept-encoding' in header_names: skips['skip_accept_encoding'] = 1 self.putrequest(method, url, **skips) if 'content-length' not in header_names: if 
'transfer-encoding' not in header_names: encode_chunked = False content_length = self._get_content_length(body, method) if content_length is None: if body is not None: if self.debuglevel > 0: print('Unable to determine size of %r' % body) encode_chunked = True self.putheader('Transfer-Encoding', 'chunked') else: self.putheader('Content-Length', str(content_length)) else: encode_chunked = False for hdr, value in headers.items(): self.putheader(hdr, value) if isinstance(body, str): body = _encode(body, 'body') self.endheaders(body, encode_chunked=encode_chunked) def getresponse(self): if self.__response and self.__response.isclosed(): self.__response = None if self.__state != _CS_REQ_SENT or self.__response: raise ResponseNotReady(self.__state) if self.debuglevel > 0: response = self.response_class(self.sock, self.debuglevel, method=self._method) else: response = self.response_class(self.sock, method=self._method) try: try: response.begin() except ConnectionError: self.close() raise assert response.will_close != _UNKNOWN self.__state = _CS_IDLE if response.will_close: self.close() else: self.__response = response return response except: response.close() raise try: import ssl except ImportError: pass else: class HTTPSConnection(HTTPConnection): "This class allows communication via SSL." default_port = HTTPS_PORT def __init__(self, host, port=None, *, timeout=socket._GLOBAL_DEFAULT_TIMEOUT, source_address=None, context=None, blocksize=8192): super(HTTPSConnection, self).__init__(host, port, timeout, source_address, blocksize=blocksize) if context is None: context = _create_https_context(self._http_vsn) self._context = context def connect(self): "Connect to a host on a given (SSL) port." 
super().connect() if self._tunnel_host: server_hostname = self._tunnel_host else: server_hostname = self.host self.sock = self._context.wrap_socket(self.sock, server_hostname=server_hostname) __all__.append("HTTPSConnection") class HTTPException(Exception): pass class NotConnected(HTTPException): pass class InvalidURL(HTTPException): pass class UnknownProtocol(HTTPException): def __init__(self, version): self.args = version, self.version = version class UnknownTransferEncoding(HTTPException): pass class UnimplementedFileMode(HTTPException): pass class IncompleteRead(HTTPException): def __init__(self, partial, expected=None): self.args = partial, self.partial = partial self.expected = expected def __repr__(self): if self.expected is not None: e = ', %i more expected' % self.expected else: e = '' return '%s(%i bytes read%s)' % (self.__class__.__name__, len(self.partial), e) __str__ = object.__str__ class ImproperConnectionState(HTTPException): pass class CannotSendRequest(ImproperConnectionState): pass class CannotSendHeader(ImproperConnectionState): pass class ResponseNotReady(ImproperConnectionState): pass class BadStatusLine(HTTPException): def __init__(self, line): if not line: line = repr(line) self.args = line, self.line = line class LineTooLong(HTTPException): def __init__(self, line_type): HTTPException.__init__(self, "got more than %d bytes when reading %s" % (_MAXLINE, line_type)) class RemoteDisconnected(ConnectionResetError, BadStatusLine): def __init__(self, *pos, **kw): BadStatusLine.__init__(self, "") ConnectionResetError.__init__(self, *pos, **kw) error = HTTPException
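A minimal end-to-end sketch of the request/response cycle implemented above, run against a throwaway local server. The EchoHandler class and the use of port 0 (any free port) are illustrative choices for this example, not part of http.client:

```python
# Exercise HTTPConnection/HTTPResponse against a local http.server instance.
import http.client
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class EchoHandler(BaseHTTPRequestHandler):
    """Hypothetical handler that answers every GET with a fixed body."""
    def do_GET(self):
        body = b'hello'
        self.send_response(200)
        self.send_header('Content-Type', 'text/plain')
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):    # keep the example quiet
        pass

server = ThreadingHTTPServer(('127.0.0.1', 0), EchoHandler)   # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection('127.0.0.1', server.server_address[1], timeout=5)
conn.request('GET', '/')
resp = conn.getresponse()     # raises ResponseNotReady if no request was sent
data = resp.read()            # reading the body frees the connection for reuse
conn.close()
server.shutdown()
server.server_close()
```

Calling `getresponse()` before `request()` completes, or before a prior response is fully read, raises the state-machine errors (`ResponseNotReady`, `CannotSendRequest`) defined above.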
The idlelib package implements the IDLE application. IDLE includes an interactive shell and editor. Starting with Python 3.6, IDLE requires Tcl/Tk 8.5 or later. Use the files named idle.* to start IDLE.

The other files are private implementations. Their details are subject to change; see PEP 434. Import them at your own risk.

testing = False  # Set True by test.test_idle.
IDLE main entry point: run IDLE as python -m idlelib. This file does not work for 2.7; see issue 24212.

import idlelib.pyshell
idlelib.pyshell.main()
Complete either attribute names or file names. Either on demand, or after a user-selected delay following a key character, pop up a list of candidates.

A modified keyword list is used in fetch_completions: the built-in context keywords True/False/None are dropped and match/case are added. Two types of completions are defined here (ATTRS and FILES); for autocomplete_w, see the import below. The tuples passed to open_completions are (evalfunc, complete, wantwin, mode): FORCE for control-space, TAB for tab, TRY_A for '.' (attributes), and TRY_F for quotes or path separators (file names). ID_CHARS includes all chars that may be in an identifier (TODO: update this here and elsewhere). rpcclt is None when not in the subprocess, or in a no-GUI test.

_delayed_completion_id holds the id of a delayed call, and _delayed_completion_index the index of the text insert when the delayed call was issued; if _delayed_completion_id is None, there is no delayed call. Keeping these as attributes makes mocking easier.

autocomplete_event: if a modifier was pressed along with the tab, or there is only whitespace before the insertion point on this line, let tab through. Otherwise find the completions and create the AutoCompleteWindow; return True if successful (no syntax error or the like was found).

open_completions: if complete is true, then if there's nothing to complete and no start of completion, don't open completions and return False. If mode is given, open a completion list only in this mode. Cancel another delayed call if one exists. For file names, find the beginning of the string: fetch_completions will look at the file system to determine whether the string value constitutes an actual file name. XXX could consider raw strings here, and unescape the string value if it's not raw. Find the last separator or the string start, then find the string start itself. For attributes, we need an object with attributes.

fetch_completions: return a pair of lists of completions for 'what'. The first list is a sublist of the second; both are sorted. If there is a Python subprocess, get the completion list there. Otherwise, either fetch_completions is running in the subprocess itself, or it was called in an IDLE editor window before any script had been run. The subprocess environment is that of the most recently run script: if two unrelated modules are being edited, some calltips in the current module may be inoperative if the module was not the last to run. With an empty 'what', the namespace is the main-module names plus builtins.
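The two-list scheme described above — a sorted "big" list of every name, and a sorted "small" sublist shown first — can be sketched outside of IDLE. This is an illustrative reimplementation of the ATTRS-mode logic, not idlelib's exact code:

```python
# Build ATTRS-mode completion lists the way the text above describes:
# bigl is every attribute, sorted; smalll is either __all__ (when defined)
# or the names that don't start with an underscore.
def attr_completions(entity):
    bigl = sorted(dir(entity))
    if "__all__" in bigl:
        smalll = sorted(entity.__all__)
    else:
        smalll = [s for s in bigl if not s.startswith('_')]
    if not smalll:          # fall back to the full list if nothing is public
        smalll = bigl
    return smalll, bigl

import string
small, big = attr_completions(string)   # string defines __all__, so small uses it
```

Showing the small list first keeps dunder and private names out of the way while still letting the user reach them from the full list.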
import __main__ import keyword import os import string import sys completion_kwds = [s for s in keyword.kwlist if s not in {'True', 'False', 'None'}] completion_kwds.extend(('match', 'case')) completion_kwds.sort() ATTRS, FILES = 0, 1 from idlelib import autocomplete_w from idlelib.config import idleConf from idlelib.hyperparser import HyperParser FORCE = True, False, True, None TAB = False, True, True, None TRY_A = False, False, False, ATTRS TRY_F = False, False, False, FILES ID_CHARS = string.ascii_letters + string.digits + "_" SEPS = f"{os.sep}{os.altsep if os.altsep else ''}" TRIGGERS = f".{SEPS}" class AutoComplete: def __init__(self, editwin=None, tags=None): self.editwin = editwin if editwin is not None: self.text = editwin.text self.tags = tags self.autocompletewindow = None self._delayed_completion_id = None self._delayed_completion_index = None @classmethod def reload(cls): cls.popupwait = idleConf.GetOption( "extensions", "AutoComplete", "popupwait", type="int", default=0) def _make_autocomplete_window(self): return autocomplete_w.AutoCompleteWindow(self.text, tags=self.tags) def _remove_autocomplete_window(self, event=None): if self.autocompletewindow: self.autocompletewindow.hide_window() self.autocompletewindow = None def force_open_completions_event(self, event): "(^space) Open completion list, even if a function call is needed." self.open_completions(FORCE) return "break" def autocomplete_event(self, event): "(tab) Complete word or open list if multiple options." if hasattr(event, "mc_state") and event.mc_state or\ not self.text.get("insert linestart", "insert").strip(): return None if self.autocompletewindow and self.autocompletewindow.is_active(): self.autocompletewindow.complete() return "break" else: opened = self.open_completions(TAB) return "break" if opened else None def try_open_completions_event(self, event=None): "(./) Open completion list after pause with no movement." 
lastchar = self.text.get("insert-1c") if lastchar in TRIGGERS: args = TRY_A if lastchar == "." else TRY_F self._delayed_completion_index = self.text.index("insert") if self._delayed_completion_id is not None: self.text.after_cancel(self._delayed_completion_id) self._delayed_completion_id = self.text.after( self.popupwait, self._delayed_open_completions, args) def _delayed_open_completions(self, args): "Call open_completions if index unchanged." self._delayed_completion_id = None if self.text.index("insert") == self._delayed_completion_index: self.open_completions(args) def open_completions(self, args): evalfuncs, complete, wantwin, mode = args if self._delayed_completion_id is not None: self.text.after_cancel(self._delayed_completion_id) self._delayed_completion_id = None hp = HyperParser(self.editwin, "insert") curline = self.text.get("insert linestart", "insert") i = j = len(curline) if hp.is_in_string() and (not mode or mode==FILES): self._remove_autocomplete_window() mode = FILES while i and curline[i-1] not in "'\"" + SEPS: i -= 1 comp_start = curline[i:j] j = i while i and curline[i-1] not in "'\"": i -= 1 comp_what = curline[i:j] elif hp.is_in_code() and (not mode or mode==ATTRS): self._remove_autocomplete_window() mode = ATTRS while i and (curline[i-1] in ID_CHARS or ord(curline[i-1]) > 127): i -= 1 comp_start = curline[i:j] if i and curline[i-1] == '.': hp.set_index("insert-%dc" % (len(curline)-(i-1))) comp_what = hp.get_expression() if (not comp_what or (not evalfuncs and comp_what.find('(') != -1)): return None else: comp_what = "" else: return None if complete and not comp_what and not comp_start: return None comp_lists = self.fetch_completions(comp_what, mode) if not comp_lists[0]: return None self.autocompletewindow = self._make_autocomplete_window() return not self.autocompletewindow.show_window( comp_lists, "insert-%dc" % len(comp_start), complete, mode, wantwin) def fetch_completions(self, what, mode): try: rpcclt = 
self.editwin.flist.pyshell.interp.rpcclt except: rpcclt = None if rpcclt: return rpcclt.remotecall("exec", "get_the_completion_list", (what, mode), {}) else: if mode == ATTRS: if what == "": namespace = {**__main__.__builtins__.__dict__, **__main__.__dict__} bigl = eval("dir()", namespace) bigl.extend(completion_kwds) bigl.sort() if "__all__" in bigl: smalll = sorted(eval("__all__", namespace)) else: smalll = [s for s in bigl if s[:1] != '_'] else: try: entity = self.get_entity(what) bigl = dir(entity) bigl.sort() if "__all__" in bigl: smalll = sorted(entity.__all__) else: smalll = [s for s in bigl if s[:1] != '_'] except: return [], [] elif mode == FILES: if what == "": what = "." try: expandedpath = os.path.expanduser(what) bigl = os.listdir(expandedpath) bigl.sort() smalll = [s for s in bigl if s[:1] != '.'] except OSError: return [], [] if not smalll: smalll = bigl return smalll, bigl def get_entity(self, name): "Lookup name in a namespace spanning sys.modules and __main.dict__." return eval(name, {**sys.modules, **__main__.__dict__}) AutoComplete.reload() if __name__ == '__main__': from unittest import main main('idlelib.idle_test.test_autocomplete', verbosity=2)
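The backward scan that open_completions uses to find what is being completed can be sketched standalone. This simplified version handles only the plain dotted-name case and ignores the full HyperParser expression logic; the function name is invented for the example:

```python
import string

ID_CHARS = string.ascii_letters + string.digits + "_"

def split_completion(curline):
    """Split a line into (expression before the dot, partial attribute).

    Mirrors the backward scan over identifier characters described above;
    non-ASCII letters (ord > 127) are accepted as identifier chars too.
    """
    i = len(curline)
    while i and (curline[i-1] in ID_CHARS or ord(curline[i-1]) > 127):
        i -= 1
    comp_start = curline[i:]          # the partial name being typed
    comp_what = ""
    if i and curline[i-1] == '.':
        # very rough: take the dotted identifier chain before the dot
        j = i - 1
        while j and (curline[j-1] in ID_CHARS + '.' or ord(curline[j-1]) > 127):
            j -= 1
        comp_what = curline[j:i-1]
    return comp_what, comp_start
```

With `comp_what` in hand, the real code evaluates it (via get_entity or the subprocess) and feeds the result to fetch_completions.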
An auto-completion window for IDLE, used by the autocomplete extension.

Event bindings go beyond <Key> so that these handlers are called before the default IDLE ones. The AutoCompleteWindow keeps track of: the Text widget it is placed over and the tags used to mark inserted text; the default foreground and background of a selection, saved because they are changed to the regular list-item colors whenever the completion start is not a prefix of the selected completion; the list of completions and possibly a longer second list; the completion mode, either ATTRS or FILES; the current completion start in the text box (a string) and the index where it begins; the last typed start, so that when the selection changes the new start stays as close as possible to what was actually typed; whether the user has indicated that they want the completion window (for example, by clicking the list); the bound event ids; a flag set if the last keypress was a Tab; and a flag that prevents recursive <Configure> callback invocations.

_binary_search(s) finds the first index in self.completions where completions[i] >= s, or the last index if there is no such index. _complete_string(s), assuming s is the prefix of a string in self.completions, returns the longest string which is a prefix of all the strings which s is a prefix of; if s is not a prefix of any completion, it returns s. It finds the end of the range of completions that s is a prefix of, then returns the maximum common prefix of the first and last of them.

_selection_changed() is called when the selection of the listbox has changed; it updates the listbox display and calls _change_start. If there are more completions, it shows them and calls itself again. show_window() shows the autocomplete list and binds events; if complete is True it completes the text, and if there is exactly one matching completion it doesn't open a list. Grabbing focus is prevented on macOS, and an acw.update_idletasks() call is needed for tk 8.6.8 on macOS (issue 40128).

winconfig_event() positions the completion list window; it avoids running on recursive <Configure> callback invocations, and since the <Configure> event may occur after the completion window is gone, it catches potential TclError exceptions when accessing acw (see bpo-41611). On Windows, an update() call is needed for the completion list window to be created so that its width and height can be fetched; this is not needed on other platforms (tested on Ubuntu and macOS), and at one point it began causing freezes on macOS (see issues 37849 and 41611). The window is placed below the current line when there is enough height below (or not enough above), otherwise above it. Also on Windows (see issue 15786), Tk misbehaves by calling winconfig_event multiple times; this must be prevented, otherwise mouse-button double clicks stop working. When the user clicks a menu, acw.focus_get() raises KeyError (see issue 734176), and on Windows the focus_get() check must be delayed after a click on acw, since it would otherwise return None and close the window. The autocomplete list is hidden if it exists and does not have focus, or on a mouse click in the widget text area; the ButtonPress event is bound only to self.widget.

The keypress handler distinguishes: normal editing of text (including BackSpace, where the new start must remain a prefix of the selection, but not '' when completing file names); keys that put the whole selected completion in the text and close the list; keys that move the selection in the listbox; Tab (two tabs in a row insert the current selection and close acw; the first tab lets autocomplete handle the completion); modifier keys, which are ignored; regular characters with a non-length-1 keycode; and unknown events, which close the window and are let through. If an event that moved the insert point was not caught, the window is closed (the selection doesn't change). hide_window() unbinds events, re-focuses frame.text (see issue 15786), and destroys the widgets.
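The narrowing logic described above — a binary search for the first completion greater than or equal to the typed prefix, followed by the longest common prefix of the matching range — can be sketched without any Tk dependency. The function names below are illustrative stand-ins, not IDLE's API:

```python
from bisect import bisect_left


def binary_search(completions, s):
    """Index of the first completion >= s, clamped to the last index."""
    i = bisect_left(completions, s)
    return min(i, len(completions) - 1)


def complete_string(completions, s):
    """Longest common prefix of all completions that start with s, or s itself.

    Assumes `completions` is sorted, as the completion window requires.
    """
    first = binary_search(completions, s)
    if not completions[first].startswith(s):
        return s  # No completion has s as a prefix.
    # Find the end of the range of completions that s is a prefix of.
    last = first
    while last + 1 < len(completions) and completions[last + 1].startswith(s):
        last += 1
    # Return the maximum common prefix of the first and last match.
    a, b = completions[first], completions[last]
    i = len(s)
    while i < min(len(a), len(b)) and a[i] == b[i]:
        i += 1
    return a[:i]
```

Because the list is sorted, every completion sharing the prefix lies in one contiguous range, so comparing only the first and last entries suffices.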
import platform

from tkinter import *
from tkinter.ttk import Scrollbar

from idlelib.autocomplete import FILES, ATTRS
from idlelib.multicall import MC_SHIFT

HIDE_VIRTUAL_EVENT_NAME = "<<autocompletewindow-hide>>"
HIDE_FOCUS_OUT_SEQUENCE = "<FocusOut>"
HIDE_SEQUENCES = (HIDE_FOCUS_OUT_SEQUENCE, "<ButtonPress>")
KEYPRESS_VIRTUAL_EVENT_NAME = "<<autocompletewindow-keypress>>"
KEYPRESS_SEQUENCES = ("<Key>", "<Key-BackSpace>", "<Key-Return>", "<Key-Tab>",
                      "<Key-Up>", "<Key-Down>", "<Key-Home>", "<Key-End>",
                      "<Key-Prior>", "<Key-Next>", "<Key-Escape>")
KEYRELEASE_VIRTUAL_EVENT_NAME = "<<autocompletewindow-keyrelease>>"
KEYRELEASE_SEQUENCE = "<KeyRelease>"
LISTUPDATE_SEQUENCE = "<B1-ButtonRelease>"
WINCONFIG_SEQUENCE = "<Configure>"
DOUBLECLICK_SEQUENCE = "<B1-Double-ButtonRelease>"


class AutoCompleteWindow:

    def __init__(self, widget, tags):
        self.widget = widget
        self.tags = tags
        self.autocompletewindow = self.listbox = self.scrollbar = None
        self.origselforeground = self.origselbackground = None
        self.completions = None
        self.morecompletions = None
        self.mode = None
        self.start = None
        self.startindex = None
        self.lasttypedstart = None
        self.userwantswindow = None
        self.hideid = self.keypressid = self.listupdateid = \
            self.winconfigid = self.keyreleaseid = self.doubleclickid = None
        self.lastkey_was_tab = False
        self.is_configuring = False

    def _change_start(self, newstart):
        min_len = min(len(self.start), len(newstart))
        i = 0
        while i < min_len and self.start[i] == newstart[i]:
            i += 1
        if i < len(self.start):
            self.widget.delete("%s+%dc" % (self.startindex, i),
                               "%s+%dc" % (self.startindex, len(self.start)))
        if i < len(newstart):
            self.widget.insert("%s+%dc" % (self.startindex, i),
                               newstart[i:],
                               self.tags)
        self.start = newstart

    def _binary_search(self, s):
        """Find the first index in self.completions where completions[i] is
        greater or equal to s, or the last index if there is no such."""
        i = 0
        j = len(self.completions)
        while j > i:
            m = (i + j) // 2
            if self.completions[m] >= s:
                j = m
            else:
                i = m + 1
        return min(i, len(self.completions)-1)

    def _complete_string(self, s):
        """Assuming that s is the prefix of a string in self.completions,
        return the longest string which is a prefix of all the strings which
        s is a prefix of them. If s is not a prefix of a string, return s."""
        first = self._binary_search(s)
        if self.completions[first][:len(s)] != s:
            # There is not even one completion which s is a prefix of.
            return s
        # Find the end of the range of completions where s is a prefix of.
        i = first + 1
        j = len(self.completions)
        while j > i:
            m = (i + j) // 2
            if self.completions[m][:len(s)] != s:
                j = m
            else:
                i = m + 1
        last = i-1
        if first == last:  # only one possible completion
            return self.completions[first]
        # We should return the maximum prefix of first and last
        first_comp = self.completions[first]
        last_comp = self.completions[last]
        min_len = min(len(first_comp), len(last_comp))
        i = len(s)
        while i < min_len and first_comp[i] == last_comp[i]:
            i += 1
        return first_comp[:i]

    def _selection_changed(self):
        """Call when the selection of the Listbox has changed.

        Updates the Listbox display and calls _change_start.
        """
        cursel = int(self.listbox.curselection()[0])
        self.listbox.see(cursel)

        lts = self.lasttypedstart
        selstart = self.completions[cursel]
        if self._binary_search(lts) == cursel:
            newstart = lts
        else:
            min_len = min(len(lts), len(selstart))
            i = 0
            while i < min_len and lts[i] == selstart[i]:
                i += 1
            newstart = selstart[:i]
        self._change_start(newstart)

        if self.completions[cursel][:len(self.start)] == self.start:
            # start is a prefix of the selected completion
            self.listbox.configure(selectbackground=self.origselbackground,
                                   selectforeground=self.origselforeground)
        else:
            self.listbox.configure(selectbackground=self.listbox.cget("bg"),
                                   selectforeground=self.listbox.cget("fg"))
            # If there are more completions, show them, and call me again.
            if self.morecompletions:
                self.completions = self.morecompletions
                self.morecompletions = None
                self.listbox.delete(0, END)
                for item in self.completions:
                    self.listbox.insert(END, item)
                self.listbox.select_set(self._binary_search(self.start))
                self._selection_changed()

    def show_window(self, comp_lists, index, complete, mode, userWantsWin):
        """Show the autocomplete list, bind events.

        If complete is True, complete the text, and if there is exactly
        one matching completion, don't open a list.
        """
        # Handle the start we already have
        self.completions, self.morecompletions = comp_lists
        self.mode = mode
        self.startindex = self.widget.index(index)
        self.start = self.widget.get(self.startindex, "insert")
        if complete:
            completed = self._complete_string(self.start)
            start = self.start
            self._change_start(completed)
            i = self._binary_search(completed)
            if self.completions[i] == completed and \
               (i == len(self.completions)-1 or
                self.completions[i+1][:len(completed)] != completed):
                # There is exactly one matching completion
                return completed == start
        self.userwantswindow = userWantsWin
        self.lasttypedstart = self.start

        self.autocompletewindow = acw = Toplevel(self.widget)
        acw.withdraw()
        acw.wm_overrideredirect(1)
        try:
            # Prevent grabbing focus on macOS.
            acw.tk.call("::tk::unsupported::MacWindowStyle", "style", acw._w,
                        "help", "noActivates")
        except TclError:
            pass
        self.scrollbar = scrollbar = Scrollbar(acw, orient=VERTICAL)
        self.listbox = listbox = Listbox(acw, yscrollcommand=scrollbar.set,
                                         exportselection=False)
        for item in self.completions:
            listbox.insert(END, item)
        self.origselforeground = listbox.cget("selectforeground")
        self.origselbackground = listbox.cget("selectbackground")
        scrollbar.config(command=listbox.yview)
        scrollbar.pack(side=RIGHT, fill=Y)
        listbox.pack(side=LEFT, fill=BOTH, expand=True)
        acw.lift()

        # Initialize the listbox selection
        self.listbox.select_set(self._binary_search(self.start))
        self._selection_changed()

        # Bind events
        self.hideaid = acw.bind(HIDE_VIRTUAL_EVENT_NAME, self.hide_event)
        self.hidewid = self.widget.bind(HIDE_VIRTUAL_EVENT_NAME,
                                        self.hide_event)
        acw.event_add(HIDE_VIRTUAL_EVENT_NAME, HIDE_FOCUS_OUT_SEQUENCE)
        for seq in HIDE_SEQUENCES:
            self.widget.event_add(HIDE_VIRTUAL_EVENT_NAME, seq)
        self.keypressid = self.widget.bind(KEYPRESS_VIRTUAL_EVENT_NAME,
                                           self.keypress_event)
        for seq in KEYPRESS_SEQUENCES:
            self.widget.event_add(KEYPRESS_VIRTUAL_EVENT_NAME, seq)
        self.keyreleaseid = self.widget.bind(KEYRELEASE_VIRTUAL_EVENT_NAME,
                                             self.keyrelease_event)
        self.widget.event_add(KEYRELEASE_VIRTUAL_EVENT_NAME,
                              KEYRELEASE_SEQUENCE)
        self.listupdateid = listbox.bind(LISTUPDATE_SEQUENCE,
                                         self.listselect_event)
        self.is_configuring = False
        self.winconfigid = acw.bind(WINCONFIG_SEQUENCE, self.winconfig_event)
        self.doubleclickid = listbox.bind(DOUBLECLICK_SEQUENCE,
                                          self.doubleclick_event)
        return None

    def winconfig_event(self, event):
        if self.is_configuring:
            # Avoid running on recursive <Configure> callback invocations.
            return
        self.is_configuring = True
        if not self.is_active():
            return

        # Since the <Configure> event may occur after the completion window
        # is gone, catch potential TclError exceptions when accessing acw.
        # See: bpo-41611.
        try:
            # Position the completion list window
            text = self.widget
            text.see(self.startindex)
            x, y, cx, cy = text.bbox(self.startindex)
            acw = self.autocompletewindow
            if platform.system().startswith('Windows'):
                # On Windows an update() call is needed for the completion
                # list window to be created, so that we can fetch its width
                # and height.  However, this is not needed on other platforms
                # (tested on Ubuntu and macOS) but at one point began
                # causing freezes on macOS.  See issues 37849 and 41611.
                acw.update()
            acw_width, acw_height = acw.winfo_width(), acw.winfo_height()
            text_width, text_height = text.winfo_width(), text.winfo_height()
            new_x = text.winfo_rootx() + min(x, max(0, text_width - acw_width))
            new_y = text.winfo_rooty() + y
            if (text_height - (y + cy) >= acw_height  # enough height below
                    or y < acw_height):  # not enough height above
                # Place acw below current line.
                new_y += cy
            else:
                # Place acw above current line.
                new_y -= acw_height
            acw.wm_geometry("+%d+%d" % (new_x, new_y))
            acw.deiconify()
            acw.update_idletasks()
        except TclError:
            pass

        if platform.system().startswith('Windows'):
            # See issue 15786.  When on Windows platform, Tk will misbehave
            # to call winconfig_event multiple times, we need to prevent this,
            # otherwise mouse button double click will not be able to used.
            try:
                acw.unbind(WINCONFIG_SEQUENCE, self.winconfigid)
            except TclError:
                pass
            self.winconfigid = None

        self.is_configuring = False

    def _hide_event_check(self):
        if not self.autocompletewindow:
            return
        try:
            if not self.autocompletewindow.focus_get():
                self.hide_window()
        except KeyError:
            # See issue 734176, when user click on menu, acw.focus_get()
            # will get KeyError.
            self.hide_window()

    def hide_event(self, event):
        # Hide autocomplete list if it exists and does not have focus or
        # mouse click on widget / text area.
        if self.is_active():
            if event.type == EventType.FocusOut:
                # On Windows platform, it will need to delay the check for
                # acw.focus_get() when click on acw, otherwise it will return
                # None and close the window.
                self.widget.after(1, self._hide_event_check)
            elif event.type == EventType.ButtonPress:
                # ButtonPress event only bind to self.widget
                self.hide_window()

    def listselect_event(self, event):
        if self.is_active():
            self.userwantswindow = True
            cursel = int(self.listbox.curselection()[0])
            self._change_start(self.completions[cursel])

    def doubleclick_event(self, event):
        # Put the selected completion in the text, and close the list.
        cursel = int(self.listbox.curselection()[0])
        self._change_start(self.completions[cursel])
        self.hide_window()

    def keypress_event(self, event):
        if not self.is_active():
            return None
        keysym = event.keysym
        if hasattr(event, "mc_state"):
            state = event.mc_state
        else:
            state = 0
        if keysym != "Tab":
            self.lastkey_was_tab = False
        if (len(keysym) == 1 or keysym in ("underscore", "BackSpace")
                or (self.mode == FILES and keysym in ("period", "minus"))) \
                and not (state & ~MC_SHIFT):
            # Normal editing of text
            if len(keysym) == 1:
                self._change_start(self.start + keysym)
            elif keysym == "underscore":
                self._change_start(self.start + '_')
            elif keysym == "period":
                self._change_start(self.start + '.')
            elif keysym == "minus":
                self._change_start(self.start + '-')
            else:
                # keysym == "BackSpace"
                if len(self.start) == 0:
                    self.hide_window()
                    return None
                self._change_start(self.start[:-1])
            self.lasttypedstart = self.start
            self.listbox.select_clear(0, int(self.listbox.curselection()[0]))
            self.listbox.select_set(self._binary_search(self.start))
            self._selection_changed()
            return "break"

        elif keysym == "Return":
            self.complete()
            self.hide_window()
            return 'break'

        elif (self.mode == ATTRS and keysym in
              ("period", "space", "parenleft", "parenright", "bracketleft",
               "bracketright")) or \
             (self.mode == FILES and keysym in
              ("slash", "backslash", "quotedbl", "apostrophe")) \
             and not (state & ~MC_SHIFT):
            # If start is a prefix of the selection, but is not '' when
            # completing file names, put the whole selected completion.
            # Anyway, close the list.
            cursel = int(self.listbox.curselection()[0])
            if self.completions[cursel][:len(self.start)] == self.start \
                    and (self.mode == ATTRS or self.start):
                self._change_start(self.completions[cursel])
            self.hide_window()
            return None

        elif keysym in ("Home", "End", "Prior", "Next", "Up", "Down") and \
                not state:
            # Move the selection in the listbox
            self.userwantswindow = True
            cursel = int(self.listbox.curselection()[0])
            if keysym == "Home":
                newsel = 0
            elif keysym == "End":
                newsel = len(self.completions)-1
            elif keysym in ("Prior", "Next"):
                jump = self.listbox.nearest(self.listbox.winfo_height()) - \
                       self.listbox.nearest(0)
                if keysym == "Prior":
                    newsel = max(0, cursel-jump)
                else:
                    assert keysym == "Next"
                    newsel = min(len(self.completions)-1, cursel+jump)
            elif keysym == "Up":
                newsel = max(0, cursel-1)
            else:
                assert keysym == "Down"
                newsel = min(len(self.completions)-1, cursel+1)
            self.listbox.select_clear(cursel)
            self.listbox.select_set(newsel)
            self._selection_changed()
            self._change_start(self.completions[newsel])
            return "break"

        elif (keysym == "Tab" and not state):
            if self.lastkey_was_tab:
                # Two tabs in a row; insert current selection and close acw.
                cursel = int(self.listbox.curselection()[0])
                self._change_start(self.completions[cursel])
                self.hide_window()
                return "break"
            else:
                # First tab; let AutoComplete handle the completion.
                self.userwantswindow = True
                self.lastkey_was_tab = True
                return None

        elif any(s in keysym for s in ("Shift", "Control", "Alt",
                                       "Meta", "Command", "Option")):
            # A modifier key, so ignore
            return None

        elif event.char and event.char >= ' ':
            # Regular character with a non-length-1 keycode
            self._change_start(self.start + event.char)
            self.lasttypedstart = self.start
            self.listbox.select_clear(0, int(self.listbox.curselection()[0]))
            self.listbox.select_set(self._binary_search(self.start))
            self._selection_changed()
            return "break"

        else:
            # Unknown event, close the window and let it through.
            self.hide_window()
            return None

    def keyrelease_event(self, event):
        if not self.is_active():
            return
        if self.widget.index("insert") != \
                self.widget.index("%s+%dc" % (self.startindex,
                                              len(self.start))):
            # If we didn't catch an event which moved the insert, close window
            self.hide_window()

    def is_active(self):
        return self.autocompletewindow is not None

    def complete(self):
        self._change_start(self._complete_string(self.start))
        # The selection doesn't change.

    def hide_window(self):
        if not self.is_active():
            return

        # Unbind events.
        self.autocompletewindow.event_delete(HIDE_VIRTUAL_EVENT_NAME,
                                             HIDE_FOCUS_OUT_SEQUENCE)
        for seq in HIDE_SEQUENCES:
            self.widget.event_delete(HIDE_VIRTUAL_EVENT_NAME, seq)
        self.autocompletewindow.unbind(HIDE_VIRTUAL_EVENT_NAME, self.hideaid)
        self.widget.unbind(HIDE_VIRTUAL_EVENT_NAME, self.hidewid)
        self.hideaid = None
        self.hidewid = None
        for seq in KEYPRESS_SEQUENCES:
            self.widget.event_delete(KEYPRESS_VIRTUAL_EVENT_NAME, seq)
        self.widget.unbind(KEYPRESS_VIRTUAL_EVENT_NAME, self.keypressid)
        self.keypressid = None
        self.widget.event_delete(KEYRELEASE_VIRTUAL_EVENT_NAME,
                                 KEYRELEASE_SEQUENCE)
        self.widget.unbind(KEYRELEASE_VIRTUAL_EVENT_NAME, self.keyreleaseid)
        self.keyreleaseid = None
        self.listbox.unbind(LISTUPDATE_SEQUENCE, self.listupdateid)
        self.listupdateid = None
        if self.winconfigid:
            self.autocompletewindow.unbind(WINCONFIG_SEQUENCE,
                                           self.winconfigid)
            self.winconfigid = None

        # Re-focus on frame.text (see issue 15786).
        self.widget.focus_set()

        # Destroy widgets.
        self.scrollbar.destroy()
        self.scrollbar = None
        self.listbox.destroy()
        self.listbox = None
        self.autocompletewindow.destroy()
        self.autocompletewindow = None


if __name__ == '__main__':
    from unittest import main
    main('idlelib.idle_test.test_autocomplete_w', verbosity=2, exit=False)
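_change_start above rewrites the typed prefix in the Text widget by keeping the common prefix and editing only the differing tail, so the screen update is minimal. As a widget-free sketch of that diffing step (change_start here is a hypothetical standalone helper, not part of IDLE):

```python
def change_start(old, new):
    """Return (keep, delete_count, insert_text): the minimal tail edit
    that rewrites `old` into `new`.

    `keep` characters are left untouched, `delete_count` characters are
    removed after them, and `insert_text` is inserted in their place —
    the same three quantities _change_start feeds to Text.delete/insert.
    """
    i = 0
    while i < min(len(old), len(new)) and old[i] == new[i]:
        i += 1
    return i, len(old) - i, new[i:]
```

For example, growing "fo" into "foo" needs no deletion and a one-character insert, while fixing "foa" into "foo" deletes one character first.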
Complete the current word before the cursor with words in the editor.

Each menu selection or shortcut-key selection replaces the word with a different word with the same prefix. The search for matches begins before the target and moves toward the top of the editor; it then starts after the cursor and moves down; it then returns to the original word, and the cycle starts again (a bell warns when the candidates have cycled around). Changing the current text line, or leaving the cursor in a different place, before requesting the next selection causes AutoExpand to reset its state. There is only one instance of AutoExpand.

Candidates are gathered by searching backwards through the words before the cursor, then onwards through the words after it.
import re
import string


class AutoExpand:
    wordchars = string.ascii_letters + string.digits + "_"

    def __init__(self, editwin):
        self.text = editwin.text
        self.bell = self.text.bell
        self.state = None

    def expand_word_event(self, event):
        "Replace the current word with the next expansion."
        curinsert = self.text.index("insert")
        curline = self.text.get("insert linestart", "insert lineend")
        if not self.state:
            words = self.getwords()
            index = 0
        else:
            words, index, insert, line = self.state
            if insert != curinsert or line != curline:
                words = self.getwords()
                index = 0
        if not words:
            self.bell()
            return "break"
        word = self.getprevword()
        self.text.delete("insert - %d chars" % len(word), "insert")
        newword = words[index]
        index = (index + 1) % len(words)
        if index == 0:
            self.bell()  # Warn we cycled around.
        self.text.insert("insert", newword)
        curinsert = self.text.index("insert")
        curline = self.text.get("insert linestart", "insert lineend")
        self.state = words, index, curinsert, curline
        return "break"

    def getwords(self):
        "Return a list of words that match the prefix before the cursor."
        word = self.getprevword()
        if not word:
            return []
        before = self.text.get("1.0", "insert wordstart")
        wbefore = re.findall(r"\b" + word + r"\w+\b", before)
        del before
        after = self.text.get("insert wordend", "end")
        wafter = re.findall(r"\b" + word + r"\w+\b", after)
        del after
        if not wbefore and not wafter:
            return []
        words = []
        dict = {}
        # Search backwards through words before.
        wbefore.reverse()
        for w in wbefore:
            if dict.get(w):
                continue
            words.append(w)
            dict[w] = w
        # Search onwards through words after.
        for w in wafter:
            if dict.get(w):
                continue
            words.append(w)
            dict[w] = w
        words.append(word)
        return words

    def getprevword(self):
        "Return the word prefix before the cursor."
        line = self.text.get("insert linestart", "insert")
        i = len(line)
        while i > 0 and line[i-1] in self.wordchars:
            i = i-1
        return line[i:]


if __name__ == '__main__':
    from unittest import main
    main('idlelib.idle_test.test_autoexpand', verbosity=2)
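The candidate-gathering in getwords() can be mirrored by a small standalone function. get_expansions below is a hypothetical name; unlike getwords it takes the surrounding text explicitly instead of reading a Tk Text widget, and for illustration it always ends with the original word (so cycling returns to it) even when there are no other matches:

```python
import re


def get_expansions(text_before, text_after, word):
    """Expansion candidates for `word`: nearest-before first, then the
    words after the cursor, ending with the original word itself."""
    pattern = r"\b" + re.escape(word) + r"\w+\b"
    before = re.findall(pattern, text_before)
    after = re.findall(pattern, text_after)
    seen, words = set(), []
    # Search backwards through words before (nearest occurrence first).
    for w in reversed(before):
        if w not in seen:
            seen.add(w)
            words.append(w)
    # Then onwards through words after.
    for w in after:
        if w not in seen:
            seen.add(w)
            words.append(w)
    words.append(word)
    return words
```

Reversing the "before" matches is what makes the nearest preceding occurrence the first suggestion, matching the cycle order described above.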
Module browser.

XXX TO DO:
- reparse when source changed (maybe just a button would be OK?, or recheck on window popup)
- add popup menu with more options (e.g. doc strings, base classes, imports)
- add base classes to class browser tree

Normally pyshell.flist.open is used, but there is no pyshell.flist for htest. The browser depends on pyclbr and importlib, which do not support .pyi files.

transform_children() transforms a child dictionary to an ordered sequence of objects; the dictionary maps names to pyclbr information objects. Imported objects are filtered out, and class names are augmented with their bases. The insertion order of the dictionary is assumed to have been in line-number order, so sorting is not necessary, and a list is used since the values should already be sorted. The current tree only calls this once per child_dict, as it saves TreeItems once created; a future tree and tests might violate this, so a check prevents multiple in-place augmentations (if obj.name != key, it has already been suffixed).

ModuleBrowser browses module classes and functions in IDLE. This class is also the base class for PathBrowser: PathBrowser.init() and close() are inherited, other methods are overridden, and PathBrowser.__init__ does not call the __init__ below. The constructor creates a window for browsing a module's structure, given master (parent for widgets), path (full path of the file to browse), and the flags _htest (bool, change box location when running htest; the dialog is placed below the parent) and _utest (bool, suppress contents when running unittest). The global file_open is the function used for opening a file. The instance variables name (module name) and file (full path and module with supported extension) are used in creating ModuleBrowserTreeItem as the root node for the tree, and subsequently in the children.

ModuleBrowserTreeItem is the browser tree item for a Python module; it uses TreeItem as the basis for the structure of the tree and is used by both browsers. ChildBrowserTreeItem is the browser tree item for child nodes within the module; method items and class items use it. For htest, nested objects are added; a file may also be passed on the command line (in which case unittest fails).
import os
import pyclbr
import sys

from idlelib.config import idleConf
from idlelib import pyshell
from idlelib.tree import TreeNode, TreeItem, ScrolledCanvas
from idlelib.util import py_extensions
from idlelib.window import ListedToplevel

file_open = None
browseable_extension_blocklist = ('.pyi',)


def is_browseable_extension(path):
    _, ext = os.path.splitext(path)
    ext = os.path.normcase(ext)
    return ext in py_extensions and ext not in browseable_extension_blocklist


def transform_children(child_dict, modname=None):
    obs = []  # Use list since values should already be sorted.
    for key, obj in child_dict.items():
        if modname is None or obj.module == modname:
            # If obj.name != key, it has already been suffixed.
            if hasattr(obj, 'super') and obj.super and obj.name == key:
                supers = []
                for sup in obj.super:
                    if isinstance(sup, str):
                        sname = sup
                    else:
                        sname = sup.name
                        if sup.module != obj.module:
                            sname = f'{sup.module}.{sname}'
                    supers.append(sname)
                obj.name += '({})'.format(', '.join(supers))
            obs.append(obj)
    return obs


class ModuleBrowser:
    """Browse module classes and functions in IDLE."""

    def __init__(self, master, path, *, _htest=False, _utest=False):
        self.master = master
        self.path = path
        self._htest = _htest
        self._utest = _utest
        self.init()

    def close(self, event=None):
        "Dismiss the window and the tree nodes."
        self.top.destroy()
        self.node.destroy()

    def init(self):
        "Create browser tkinter widgets, including the tree."
        global file_open
        root = self.master
        flist = (pyshell.flist if not (self._htest or self._utest)
                 else pyshell.PyShellFileList(root))
        file_open = flist.open
        pyclbr._modules.clear()

        # Create top.
        self.top = top = ListedToplevel(root)
        top.protocol("WM_DELETE_WINDOW", self.close)
        top.bind("<Escape>", self.close)
        if self._htest:  # Place dialog below parent if running htest.
            top.geometry("+%d+%d" %
                         (root.winfo_rootx(), root.winfo_rooty() + 200))
        self.settitle()
        top.focus_set()

        # Create scrolled canvas.
        theme = idleConf.CurrentTheme()
        background = idleConf.GetHighlight(theme, 'normal')['background']
        sc = ScrolledCanvas(top, bg=background, highlightthickness=0,
                            takefocus=1)
        sc.frame.pack(expand=1, fill="both")
        item = self.rootnode()
        self.node = node = TreeNode(sc.canvas, None, item)
        if not self._utest:
            node.update()
            node.expand()

    def settitle(self):
        "Set the window title."
        self.top.wm_title("Module Browser - " + os.path.basename(self.path))
        self.top.wm_iconname("Module Browser")

    def rootnode(self):
        "Return a ModuleBrowserTreeItem as the root of the tree."
        return ModuleBrowserTreeItem(self.path)


class ModuleBrowserTreeItem(TreeItem):
    """Browser tree for Python module.

    Uses TreeItem as the basis for the structure of the tree.
    Used by both browsers.
    """

    def __init__(self, file):
        """Create a TreeItem for the file.

        Args:
            file: Full path and module name.
        """
        self.file = file

    def GetText(self):
        "Return the module name as the text string to display."
        return os.path.basename(self.file)

    def GetIconName(self):
        "Return the name of the icon to display."
        return "python"

    def GetSubList(self):
        "Return ChildBrowserTreeItems for children."
        return [ChildBrowserTreeItem(obj) for obj in self.listchildren()]

    def OnDoubleClick(self):
        "Open a module in an editor window when double clicked."
        if not is_browseable_extension(self.file):
            return
        if not os.path.exists(self.file):
            return
        file_open(self.file)

    def IsExpandable(self):
        "Return True if Python file."
        return is_browseable_extension(self.file)

    def listchildren(self):
        "Return sequenced classes and functions in the module."
        if not is_browseable_extension(self.file):
            return []
        dir, base = os.path.split(self.file)
        name, _ = os.path.splitext(base)
        try:
            tree = pyclbr.readmodule_ex(name, [dir] + sys.path)
        except ImportError:
            return []
        return transform_children(tree, name)


class ChildBrowserTreeItem(TreeItem):
    """Browser tree for child nodes within the module.

    Uses TreeItem as the basis for the structure of the tree.
    """

    def __init__(self, obj):
        "Create a TreeItem for a pyclbr class/function object."
        self.obj = obj
        self.name = obj.name
        self.isfunction = isinstance(obj, pyclbr.Function)

    def GetText(self):
        "Return the name of the function/class to display."
        name = self.name
        if self.isfunction:
            return "def " + name + "(...)"
        else:
            return "class " + name

    def GetIconName(self):
        "Return the name of the icon to display."
        if self.isfunction:
            return "python"
        else:
            return "folder"

    def IsExpandable(self):
        "Return True if self.obj has nested objects."
        return self.obj.children != {}

    def GetSubList(self):
        "Return ChildBrowserTreeItems for children."
        return [ChildBrowserTreeItem(obj)
                for obj in transform_children(self.obj.children)]

    def OnDoubleClick(self):
        "Open module with file_open and position to lineno."
        try:
            edit = file_open(self.obj.file)
            edit.gotoline(self.obj.lineno)
        except (OSError, AttributeError):
            pass


def _module_browser(parent):  # htest #
    if len(sys.argv) > 1:  # If pass file on command line.
        file = sys.argv[1]
    else:
        file = __file__
        # Add nested objects for htest.
        class Nested_in_func(TreeNode):
            def nested_in_class(): pass

        def closure():
            class Nested_in_closure:
                pass
    ModuleBrowser(parent, file, _htest=True)


if __name__ == "__main__":
    if len(sys.argv) == 1:  # If pass file on command line, unittest fails.
        from unittest import main
        main('idlelib.idle_test.test_browser', verbosity=2, exit=False)

    from idlelib.idle_test.htest import run
    run(_module_browser)
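The base-class suffixing done by transform_children can be exercised on its own. The Info class below is a hypothetical stand-in for a pyclbr class entry (real entries come from pyclbr.readmodule_ex), and suffix_bases is an illustrative extraction of just the name-augmentation step:

```python
class Info:
    """Stand-in for a pyclbr class entry (hypothetical, for illustration)."""
    def __init__(self, name, module, super_=()):
        self.name = name
        self.module = module
        self.super = list(super_)


def suffix_bases(obj):
    """Append '(Base, other.Base)' to a class name, as transform_children
    does; bases from another module are qualified with that module's name."""
    supers = []
    for sup in obj.super:
        if isinstance(sup, str):
            sname = sup  # pyclbr stores unresolved bases as plain strings.
        else:
            sname = sup.name
            if sup.module != obj.module:
                sname = f'{sup.module}.{sname}'
        supers.append(sname)
    if supers:
        obj.name += '({})'.format(', '.join(supers))
    return obj.name
```

Mutating obj.name in place is why the real code checks obj.name == key first: a second pass would otherwise suffix the bases twice.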
Pop up a reminder of how to call a function.

Call tips are floating windows which display function, class, and method parameter and docstring information when you type an opening parenthesis, and which disappear when you type a closing parenthesis. Calltip.__init__ accepts editwin=None for the subprocess and tests; see __init__ for the usage of _make_tk_calltip_window.

open_calltip() maybe closes an existing calltip and maybe opens a new one. It is called from the force_open, try_open, and refresh _calltip_event functions; try_open covers the case where it would be nice to open a calltip but it is not really necessary, for example after an opening bracket, so function calls won't be made. If the cursor is not inside parentheses, there is no calltip. If a calltip is already shown for the current parentheses, nothing is done. If there is no expression before the opening parenthesis (e.g. because it is in a string, or it is the opener for a tuple), nothing is done. Otherwise, at this point the current index is after an opening parenthesis, in a section of code, preceded by a valid expression; if there is a calltip shown, it is not for the same index and should be closed. A simple, fast heuristic is applied: if the preceding expression includes an opening parenthesis, it likely includes a function call, so it is not evaluated unless explicitly requested.

fetch_tip() returns the argument list and docstring of a function or class. If there is a Python subprocess, the calltip is fetched there; otherwise, either this fetch_tip() is running in the subprocess itself, or it was called in an IDLE running without the subprocess. The subprocess environment is that of the most recently run script: if two unrelated modules are being edited, some calltips in the current module may be inoperative if the module was not the last to run. To find methods, fetch_tip must be fed a fully qualified name.

get_entity() returns the object corresponding to an expression evaluated in a namespace spanning sys.modules and __main__.__dict__. Only user code is protected: an uncaught exception closes IDLE, and eval can raise any exception, especially if user classes are involved.

The following constants are used in get_argspec(), and some in tests: _MAX_LINES (enough lines for bytes) and _INDENT (for wrapped signatures). get_argspec() returns a string describing the signature of a callable object, or ''. For Python-coded functions and methods, the first line is introspected, with the self parameter deleted for classes (__init__) and bound methods. The next lines are the first lines of the docstring, up to the first empty line or _MAX_LINES; for builtins, this typically includes the arguments in addition to the return value. The function determines the function object fob to inspect (a buggy user object could raise anything, and there is no popup for non-callables; for get_argspec tests see test_buggy_getattr_class, calla, callb), initializes argspec and wraps it to get lines, uses the default callable argspec if fob has no arguments, then augments the lines from the docstring, if any, and joins them to get the final argspec.
import __main__
import inspect
import re
import sys
import textwrap
import types

from idlelib import calltip_w
from idlelib.hyperparser import HyperParser


class Calltip:

    def __init__(self, editwin=None):
        if editwin is None:  # subprocess and test
            self.editwin = None
        else:
            self.editwin = editwin
            self.text = editwin.text
            self.active_calltip = None
            self._calltip_window = self._make_tk_calltip_window

    def close(self):
        self._calltip_window = None

    def _make_tk_calltip_window(self):
        # See __init__ for usage.
        return calltip_w.CalltipWindow(self.text)

    def remove_calltip_window(self, event=None):
        if self.active_calltip:
            self.active_calltip.hidetip()
            self.active_calltip = None

    def force_open_calltip_event(self, event):
        "The user selected the menu entry or hotkey, open the tip."
        self.open_calltip(True)
        return "break"

    def try_open_calltip_event(self, event):
        self.open_calltip(False)

    def refresh_calltip_event(self, event):
        if self.active_calltip and self.active_calltip.tipwindow:
            self.open_calltip(False)

    def open_calltip(self, evalfuncs):
        hp = HyperParser(self.editwin, "insert")
        sur_paren = hp.get_surrounding_brackets('(')

        # If not inside parentheses, no calltip.
        if not sur_paren:
            self.remove_calltip_window()
            return

        # If a calltip is shown for the current parentheses, do nothing.
        if self.active_calltip:
            opener_line, opener_col = map(int, sur_paren[0].split('.'))
            if ((opener_line, opener_col) ==
                (self.active_calltip.parenline,
                 self.active_calltip.parencol)):
                return

        hp.set_index(sur_paren[0])
        try:
            expression = hp.get_expression()
        except ValueError:
            expression = None
        if not expression:
            # No expression before the opening parenthesis, e.g.
            # because it's in a string or the opener for a tuple:
            # do nothing.
            return

        # At this point, the current index is after an opening
        # parenthesis, in a section of code, preceded by a valid
        # expression.  If there is a calltip shown, it's not for the
        # same index and should be closed.
        self.remove_calltip_window()

        # Simple, fast heuristic: if the preceding expression includes
        # an opening parenthesis, it likely includes a function call.
        if not evalfuncs and (expression.find('(') != -1):
            return

        argspec = self.fetch_tip(expression)
        if not argspec:
            return
        self.active_calltip = self._calltip_window()
        self.active_calltip.showtip(argspec, sur_paren[0], sur_paren[1])

    def fetch_tip(self, expression):
        try:
            rpcclt = self.editwin.flist.pyshell.interp.rpcclt
        except AttributeError:
            rpcclt = None
        if rpcclt:
            return rpcclt.remotecall("exec", "get_the_calltip",
                                     (expression,), {})
        else:
            return get_argspec(get_entity(expression))


def get_entity(expression):
    if expression:
        namespace = {**sys.modules, **__main__.__dict__}
        try:
            # Only protect user code.
            return eval(expression, namespace)
        except BaseException:
            # An uncaught exception closes IDLE, and eval can raise any
            # exception, especially if user classes are involved.
            return None


# The following are used in get_argspec and some in tests.
_MAX_COLS = 85
_MAX_LINES = 5  # enough for bytes
_INDENT = ' '*4  # for wrapped signatures
_first_param = re.compile(r'(?<=\()\w*\,?\s*')
_default_callable_argspec = "See source or doc"
_invalid_method = "invalid method signature"


def get_argspec(ob):
    # Determine function object fob to inspect.
    try:
        ob_call = ob.__call__
    except BaseException:  # Buggy user object could raise anything.
        return ''  # No popup for non-callables.
    fob = ob_call if isinstance(ob_call, types.MethodType) else ob

    # Initialize argspec and wrap it to get lines.
    try:
        argspec = str(inspect.signature(fob))
    except Exception as err:
        msg = str(err)
        if msg.startswith(_invalid_method):
            return _invalid_method
        else:
            argspec = ''

    if isinstance(fob, type) and argspec == '()':
        # If fob has no argument, use default callable argspec.
        argspec = _default_callable_argspec

    lines = (textwrap.wrap(argspec, _MAX_COLS, subsequent_indent=_INDENT)
             if len(argspec) > _MAX_COLS else [argspec] if argspec else [])

    # Augment lines from docstring, if any, and join to get argspec.
    doc = inspect.getdoc(ob)
    if doc:
        for line in doc.split('\n', _MAX_LINES)[:_MAX_LINES]:
            line = line.strip()
            if not line:
                break
            if len(line) > _MAX_COLS:
                line = line[: _MAX_COLS - 3] + '...'
            lines.append(line)
    argspec = '\n'.join(lines)

    return argspec or _default_callable_argspec


if __name__ == '__main__':
    from unittest import main
    main('idlelib.idle_test.test_calltip', verbosity=2)
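The signature-plus-docstring technique above can be sketched without the IDLE machinery. The helper and sample function below (mini_argspec, greet) are hypothetical names for illustration; the sketch mirrors get_argspec's approach of combining str(inspect.signature(...)) with the first non-blank docstring lines, but omits the wrapping and method-binding details.

```python
import inspect

_MAX_LINES = 5  # mirrors calltip.py's line cap for docstring excerpts

def mini_argspec(ob):
    """Minimal sketch: signature string plus leading docstring lines."""
    try:
        argspec = str(inspect.signature(ob))
    except (TypeError, ValueError):
        argspec = ''
    lines = [argspec] if argspec else []
    doc = inspect.getdoc(ob)
    if doc:
        for line in doc.split('\n', _MAX_LINES)[:_MAX_LINES]:
            line = line.strip()
            if not line:   # stop at the first blank line, as get_argspec does
                break
            lines.append(line)
    return '\n'.join(lines)

def greet(name, punct='!'):
    """Return a greeting for name."""
    return f"Hello, {name}{punct}"

print(mini_argspec(greet))
# → (name, punct='!')
#   Return a greeting for name.
```

A calltip window would display exactly this two-line string next to the opening parenthesis.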
Debug user code with a GUI interface to a subclass of bdb.Bdb.

The Idb idb and Debugger gui instances each need a reference to each other, or to an RPC proxy for each other. If IDLE is started with -n, so that user code and idb both run in the IDLE process, Debugger is called without an idb: Debugger.__init__ calls Idb with its (incomplete) self, Idb.__init__ stores gui, and gui then stores idb. If IDLE is started normally, so that user code executes in a separate process, debugger_r.start_remote_debugger is called, executing in the IDLE process; it calls "start the debugger" in the remote process, which calls Idb with a gui proxy; then Debugger is called in the IDLE process for more.

Idb.user_line handles a user stopping or breaking at a line: convert the frame to a string and send it to gui (a TclError is ignored when closing the debugger window with [x] in 3.x). Idb.user_exception handles the occurrence of an exception. _in_rpc_code determines whether the debugger is within RPC code: if the frame's filename contains rpc.py, skip this frame; the check for 'idlelib' and 'debugger' in the previous frame's filename catches both idlelib/debugger.py and idlelib/debugger_r.py, on both POSIX and Windows. _frame2message returns a message string for a frame.

Debugger is the debugger interface: this class handles the drawing of the debugger window and the interactions with the underlying debugger session. __init__ instantiates and draws a debugger window (param pyshell: an instance of the PyShell window, type idlelib.pyshell.PyShell; param idb: an instance of the IDLE debugger, optional, type idlelib.debugger.Idb; if passed, a proxy of the remote instance).

run runs the debugger and must deal with the scenario where we've already got a program running in the debugger and we want to start another. If that is the case, our second run was invoked from an event dispatched not from the main event loop, but from the nested event loop in interaction below, so our stack looks something like this: outer main event loop -> run (running program with traces) -> callback to the debugger's interaction -> nested event loop -> run for the second command. This kind of nesting of event loops causes all kinds of problems (see e.g. issue 24455), especially when dealing with running as a subprocess, where there's all kinds of extra stuff happening in there; insert a traceback.print_stack() to check it out. By this point we've already called restart_subprocess in ScriptBinding; however, we also need to unwind the stack back to that outer event loop. To accomplish this, we return immediately from the nested run; abort_loop ensures the nested event loop will terminate; the debugger's interaction routine completes normally; and restart_subprocess will have taken care of stopping the running program, which will also let the outer run complete. That leaves us back at the outer main event loop, at which point our "after" event can fire and we'll come back to this routine with a clean stack.

close closes the debugger and window. It cleans up pyshell if the user clicked the debugger control close widget (this causes a harmless extra cycle through close_debugger if the user toggled the debugger from the pyshell Debug menu), then closes the debugger control window. make_gui draws the debugger GUI on the screen.

interaction (TODO: redo entire section; the tries are not needed) runs a nested main loop: Tkinter's main loop is not reentrant, so use Tcl's vwait facility, which reenters the event loop until an event handler sets the variable we're waiting on. show_frame: lineno is stackitem[1]. set_breakpoint sets a filename-lineno breakpoint in the debugger; it is called from self.load_breakpoints and EW.setbreakpoint. load_breakpoints loads PyShellEditorWindow breakpoints into the subprocess debugger.

StackViewer: at least with the stock AquaTk version on OSX 10.4, you'll get a shaking GUI that eventually kills IDLE if the width argument is specified. NamespaceViewer: 20 is the observed height of an Entry widget. Names are sorted(dict.keys()): because of (temporary) limitations on the dict_keys type (not yet public or pickleable), have the subprocess send a list of keys, not a dict_keys object; sorted() will take a dict_keys (no subprocess) or a list. There is also an obscure bug in sorted(dict) where the interpreter gets into a loop requesting non-existing dict[0], dict[1], dict[2], etc. from the debugger_r DictProxy (TODO: recheck the above; see debugger_r:159ff, debugobj:60). Strip the extra quotes caused by calling repr on the (already) repr'd value sent across the RPC interface. XXX: could we use a <Configure> callback for the following? Alas. TODO: htest.
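The "basename:lineno: funcname()" message format described above can be demonstrated with a live frame. The helper and function names below (frame2message, sample) are hypothetical stand-ins; the sketch mirrors the shape of debugger.py's _frame2message using sys._getframe to obtain a real frame object.

```python
import os
import sys

def frame2message(frame):
    # Same shape as _frame2message: "basename:lineno", plus ": name()"
    # when the code object has a real name (not the top-level "?").
    code = frame.f_code
    basename = os.path.basename(code.co_filename)
    message = f"{basename}:{frame.f_lineno}"
    if code.co_name != "?":
        message = f"{message}: {code.co_name}()"
    return message

def sample():
    # Format the message for this very call.
    return frame2message(sys._getframe())

print(sample())  # e.g. "example.py:17: sample()" (filename/lineno vary)
```

This is the string the debugger shows in its status label each time user_line or user_exception fires.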
import bdb
import os

from tkinter import *
from tkinter.ttk import Frame, Scrollbar

from idlelib import macosx
from idlelib.scrolledlist import ScrolledList
from idlelib.window import ListedToplevel


class Idb(bdb.Bdb):
    "Supply user_line and user_exception functions for Bdb."

    def __init__(self, gui):
        self.gui = gui  # An instance of Debugger or proxy thereof.
        super().__init__()

    def user_line(self, frame):
        if _in_rpc_code(frame):
            self.set_step()
            return
        message = _frame2message(frame)
        try:
            self.gui.interaction(message, frame)
        except TclError:  # When closing debugger window with [x] in 3.x
            pass

    def user_exception(self, frame, exc_info):
        if _in_rpc_code(frame):
            self.set_step()
            return
        message = _frame2message(frame)
        self.gui.interaction(message, frame, exc_info)


def _in_rpc_code(frame):
    "Determine if debugger is within RPC code."
    if frame.f_code.co_filename.count('rpc.py'):
        return True  # Skip this frame.
    else:
        prev_frame = frame.f_back
        if prev_frame is None:
            return False
        prev_name = prev_frame.f_code.co_filename
        if 'idlelib' in prev_name and 'debugger' in prev_name:
            # Catch both idlelib/debugger.py and idlelib/debugger_r.py
            # on both Posix and Windows.
            return False
        return _in_rpc_code(prev_frame)


def _frame2message(frame):
    code = frame.f_code
    filename = code.co_filename
    lineno = frame.f_lineno
    basename = os.path.basename(filename)
    message = f"{basename}:{lineno}"
    if code.co_name != "?":
        message = f"{message}: {code.co_name}()"
    return message


class Debugger:

    vstack = None
    vsource = None
    vlocals = None
    vglobals = None
    stackviewer = None
    localsviewer = None
    globalsviewer = None

    def __init__(self, pyshell, idb=None):
        if idb is None:
            idb = Idb(self)
        self.pyshell = pyshell
        self.idb = idb  # If passed, a proxy of remote instance.
        self.frame = None
        self.make_gui()
        self.interacting = False
        self.nesting_level = 0

    def run(self, *args):
        # See the module comments for the nested-event-loop scenario
        # this guards against (issue #24455).
        if self.nesting_level > 0:
            self.abort_loop()
            self.root.after(100, lambda: self.run(*args))
            return
        try:
            self.interacting = True
            return self.idb.run(*args)
        finally:
            self.interacting = False

    def close(self, event=None):
        try:
            self.quit()
        except Exception:
            pass
        if self.interacting:
            self.top.bell()
            return
        if self.stackviewer:
            self.stackviewer.close()
            self.stackviewer = None
        # Clean up pyshell if user clicked debugger control close widget.
        # (Causes a harmless extra cycle through close_debugger() if user
        # toggled debugger from pyshell Debug menu.)
        self.pyshell.close_debugger()
        # Now close the debugger control window.
        self.top.destroy()

    def make_gui(self):
        "Draw the debugger GUI on the screen."
        pyshell = self.pyshell
        self.flist = pyshell.flist
        self.root = root = pyshell.root
        self.top = top = ListedToplevel(root)
        self.top.wm_title("Debug Control")
        self.top.wm_iconname("Debug")
        top.wm_protocol("WM_DELETE_WINDOW", self.close)
        self.top.bind("<Escape>", self.close)

        self.bframe = bframe = Frame(top)
        self.bframe.pack(anchor="w")
        self.buttons = bl = []

        self.bcont = b = Button(bframe, text="Go", command=self.cont)
        bl.append(b)
        self.bstep = b = Button(bframe, text="Step", command=self.step)
        bl.append(b)
        self.bnext = b = Button(bframe, text="Over", command=self.next)
        bl.append(b)
        self.bret = b = Button(bframe, text="Out", command=self.ret)
        bl.append(b)
        self.bret = b = Button(bframe, text="Quit", command=self.quit)
        bl.append(b)

        for b in bl:
            b.configure(state="disabled")
            b.pack(side="left")

        self.cframe = cframe = Frame(bframe)
        self.cframe.pack(side="left")

        if not self.vstack:
            self.__class__.vstack = BooleanVar(top)
            self.vstack.set(1)
        self.bstack = Checkbutton(cframe, text="Stack",
                                  command=self.show_stack,
                                  variable=self.vstack)
        self.bstack.grid(row=0, column=0)
        if not self.vsource:
            self.__class__.vsource = BooleanVar(top)
        self.bsource = Checkbutton(cframe, text="Source",
                                   command=self.show_source,
                                   variable=self.vsource)
        self.bsource.grid(row=0, column=1)
        if not self.vlocals:
            self.__class__.vlocals = BooleanVar(top)
            self.vlocals.set(1)
        self.blocals = Checkbutton(cframe, text="Locals",
                                   command=self.show_locals,
                                   variable=self.vlocals)
        self.blocals.grid(row=1, column=0)
        if not self.vglobals:
            self.__class__.vglobals = BooleanVar(top)
        self.bglobals = Checkbutton(cframe, text="Globals",
                                    command=self.show_globals,
                                    variable=self.vglobals)
        self.bglobals.grid(row=1, column=1)

        self.status = Label(top, anchor="w")
        self.status.pack(anchor="w")
        self.error = Label(top, anchor="w")
        self.error.pack(anchor="w", fill="x")
        self.errorbg = self.error.cget("background")

        self.fstack = Frame(top, height=1)
        self.fstack.pack(expand=1, fill="both")
        self.flocals = Frame(top)
        self.flocals.pack(expand=1, fill="both")
        self.fglobals = Frame(top, height=1)
        self.fglobals.pack(expand=1, fill="both")

        if self.vstack.get():
            self.show_stack()
        if self.vlocals.get():
            self.show_locals()
        if self.vglobals.get():
            self.show_globals()

    def interaction(self, message, frame, info=None):
        self.frame = frame
        self.status.configure(text=message)
        if info:
            type, value, tb = info
            try:
                m1 = type.__name__
            except AttributeError:
                m1 = "%s" % str(type)
            if value is not None:
                try:
                    # TODO: redo entire section; tries not needed.
                    m1 = f"{m1}: {value}"
                except:
                    pass
            bg = "yellow"
        else:
            m1 = ""
            tb = None
            bg = self.errorbg
        self.error.configure(text=m1, background=bg)

        sv = self.stackviewer
        if sv:
            stack, i = self.idb.get_stack(self.frame, tb)
            sv.load_stack(stack, i)

        self.show_variables(1)

        if self.vsource.get():
            self.sync_source_line()

        for b in self.buttons:
            b.configure(state="normal")

        self.top.wakeup()
        # Nested main loop: Tkinter's main loop is not reentrant, so use
        # Tcl's vwait facility, which reenters the event loop until an
        # event handler sets the variable we're waiting on.
        self.nesting_level += 1
        self.root.tk.call('vwait', '::idledebugwait')
        self.nesting_level -= 1

        for b in self.buttons:
            b.configure(state="disabled")
        self.status.configure(text="")
        self.error.configure(text="", background=self.errorbg)
        self.frame = None

    def sync_source_line(self):
        frame = self.frame
        if not frame:
            return
        filename, lineno = self.__frame2fileline(frame)
        if filename[:1] + filename[-1:] != "<>" and os.path.exists(filename):
            self.flist.gotofileline(filename, lineno)

    def __frame2fileline(self, frame):
        code = frame.f_code
        filename = code.co_filename
        lineno = frame.f_lineno
        return filename, lineno

    def cont(self):
        self.idb.set_continue()
        self.abort_loop()

    def step(self):
        self.idb.set_step()
        self.abort_loop()

    def next(self):
        self.idb.set_next(self.frame)
        self.abort_loop()

    def ret(self):
        self.idb.set_return(self.frame)
        self.abort_loop()

    def quit(self):
        self.idb.set_quit()
        self.abort_loop()

    def abort_loop(self):
        self.root.tk.call('set', '::idledebugwait', '1')

    def show_stack(self):
        if not self.stackviewer and self.vstack.get():
            self.stackviewer = sv = StackViewer(self.fstack, self.flist, self)
            if self.frame:
                stack, i = self.idb.get_stack(self.frame, None)
                sv.load_stack(stack, i)
        else:
            sv = self.stackviewer
            if sv and not self.vstack.get():
                self.stackviewer = None
                sv.close()
            self.fstack['height'] = 1

    def show_source(self):
        if self.vsource.get():
            self.sync_source_line()

    def show_frame(self, stackitem):
        self.frame = stackitem[0]  # lineno is stackitem[1]
        self.show_variables()

    def show_locals(self):
        lv = self.localsviewer
        if self.vlocals.get():
            if not lv:
                self.localsviewer = NamespaceViewer(self.flocals, "Locals")
        else:
            if lv:
                self.localsviewer = None
                lv.close()
                self.flocals['height'] = 1
        self.show_variables()

    def show_globals(self):
        gv = self.globalsviewer
        if self.vglobals.get():
            if not gv:
                self.globalsviewer = NamespaceViewer(self.fglobals, "Globals")
        else:
            if gv:
                self.globalsviewer = None
                gv.close()
                self.fglobals['height'] = 1
        self.show_variables()

    def show_variables(self, force=0):
        lv = self.localsviewer
        gv = self.globalsviewer
        frame = self.frame
        if not frame:
            ldict = gdict = None
        else:
            ldict = frame.f_locals
            gdict = frame.f_globals
            if lv and gv and ldict is gdict:
                ldict = None
        if lv:
            lv.load_dict(ldict, force, self.pyshell.interp.rpcclt)
        if gv:
            gv.load_dict(gdict, force, self.pyshell.interp.rpcclt)

    def set_breakpoint(self, filename, lineno):
        self.idb.set_break(filename, lineno)

    def clear_breakpoint(self, filename, lineno):
        self.idb.clear_break(filename, lineno)

    def clear_file_breaks(self, filename):
        self.idb.clear_all_file_breaks(filename)

    def load_breakpoints(self):
        "Load PyShellEditorWindow breakpoints into subprocess debugger."
        for editwin in self.pyshell.flist.inversedict:
            filename = editwin.io.filename
            try:
                for lineno in editwin.breakpoints:
                    self.set_breakpoint(filename, lineno)
            except AttributeError:
                continue


class StackViewer(ScrolledList):
    "Code stack viewer for debugger GUI."

    def __init__(self, master, flist, gui):
        if macosx.isAquaTk():
            # At least with the stock AquaTk version on OSX 10.4 you'll
            # get a shaking GUI that eventually kills IDLE if the width
            # argument is specified.
            ScrolledList.__init__(self, master)
        else:
            ScrolledList.__init__(self, master, width=80)
        self.flist = flist
        self.gui = gui
        self.stack = []

    def load_stack(self, stack, index=None):
        self.stack = stack
        self.clear()
        for i in range(len(stack)):
            frame, lineno = stack[i]
            try:
                modname = frame.f_globals["__name__"]
            except:
                modname = "?"
            code = frame.f_code
            filename = code.co_filename
            funcname = code.co_name
            import linecache
            sourceline = linecache.getline(filename, lineno)
            sourceline = sourceline.strip()
            if funcname in ("?", "", None):
                item = "%s, line %d: %s" % (modname, lineno, sourceline)
            else:
                item = "%s.%s(), line %d: %s" % (modname, funcname,
                                                 lineno, sourceline)
            if i == index:
                item = "> " + item
            self.append(item)
        if index is not None:
            self.select(index)

    def popup_event(self, event):
        "Override base method."
        if self.stack:
            return ScrolledList.popup_event(self, event)

    def fill_menu(self):
        "Override base method."
        menu = self.menu
        menu.add_command(label="Go to source line",
                         command=self.goto_source_line)
        menu.add_command(label="Show stack frame",
                         command=self.show_stack_frame)

    def on_select(self, index):
        "Override base method."
        if 0 <= index < len(self.stack):
            self.gui.show_frame(self.stack[index])

    def on_double(self, index):
        "Override base method."
        self.show_source(index)

    def goto_source_line(self):
        index = self.listbox.index("active")
        self.show_source(index)

    def show_stack_frame(self):
        index = self.listbox.index("active")
        if 0 <= index < len(self.stack):
            self.gui.show_frame(self.stack[index])

    def show_source(self, index):
        if not (0 <= index < len(self.stack)):
            return
        frame, lineno = self.stack[index]
        code = frame.f_code
        filename = code.co_filename
        if os.path.isfile(filename):
            edit = self.flist.open(filename)
            if edit:
                edit.gotoline(lineno)


class NamespaceViewer:
    "Global/local namespace viewer for debugger GUI."

    def __init__(self, master, title, dict=None):
        width = 0
        height = 40
        if dict:
            height = 20*len(dict)  # XXX 20 == observed height of Entry widget
        self.master = master
        self.title = title
        import reprlib
        self.repr = reprlib.Repr()
        self.repr.maxstring = 60
        self.repr.maxother = 60
        self.frame = frame = Frame(master)
        self.frame.pack(expand=1, fill="both")
        self.label = Label(frame, text=title, borderwidth=2, relief="groove")
        self.label.pack(fill="x")
        self.vbar = vbar = Scrollbar(frame, name="vbar")
        vbar.pack(side="right", fill="y")
        self.canvas = canvas = Canvas(frame,
                                      height=min(300, max(40, height)),
                                      scrollregion=(0, 0, width, height))
        canvas.pack(side="left", fill="both", expand=1)
        vbar["command"] = canvas.yview
        canvas["yscrollcommand"] = vbar.set
        self.subframe = subframe = Frame(canvas)
        self.sfid = canvas.create_window(0, 0, window=subframe, anchor="nw")
        self.load_dict(dict)

    dict = -1

    def load_dict(self, dict, force=0, rpc_client=None):
        if dict is self.dict and not force:
            return
        subframe = self.subframe
        frame = self.frame
        for c in list(subframe.children.values()):
            c.destroy()
        self.dict = None
        if not dict:
            l = Label(subframe, text="None")
            l.grid(row=0, column=0)
        else:
            # Have the subprocess send a list of keys, not a dict_keys
            # object; sorted() will take a dict_keys (no subprocess) or
            # a list.
            keys_list = dict.keys()
            names = sorted(keys_list)
            row = 0
            for name in names:
                value = dict[name]
                svalue = self.repr.repr(value)  # repr(value)
                if rpc_client:
                    # Strip extra quotes caused by calling repr on the
                    # (already) repr'd value sent across the RPC interface.
                    svalue = svalue[1:-1]
                l = Label(subframe, text=name)
                l.grid(row=row, column=0, sticky="nw")
                l = Entry(subframe, width=0, borderwidth=0)
                l.insert(0, svalue)
                l.grid(row=row, column=1, sticky="nw")
                row = row+1
        self.dict = dict
        # XXX Could we use a <Configure> callback for the following?
        subframe.update_idletasks()
        width = subframe.winfo_reqwidth()
        height = subframe.winfo_reqheight()
        canvas = self.canvas
        self.canvas["scrollregion"] = (0, 0, width, height)
        if height > 300:
            canvas["height"] = 300
            frame.pack(expand=1)
        else:
            canvas["height"] = height
            frame.pack(expand=0)

    def close(self):
        self.frame.destroy()


if __name__ == "__main__":
    from unittest import main
    main('idlelib.idle_test.test_debugger', verbosity=2, exit=False)
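NamespaceViewer truncates displayed values with a reprlib.Repr configured with maxstring and maxother of 60, so huge objects do not blow out the variable pane. The standalone sketch below (variable names r, long_string, short_list are illustrative) shows that truncation behavior in isolation.

```python
import reprlib

# Same limits NamespaceViewer sets on its Repr instance.
r = reprlib.Repr()
r.maxstring = 60
r.maxother = 60

long_string = "x" * 200   # would be 200+ chars under plain repr()
short_list = [1, 2, 3]

print(r.repr(long_string))  # shortened, with '...' marking the elision
print(r.repr(short_list))   # → [1, 2, 3]  (small values pass through)
```

This is why the debugger's Locals/Globals panes stay a fixed width even when a variable holds megabytes of data.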
Support for remote Python debugging.

The structure (described in the original as ASCII art): in the Python subprocess, Idb calls a GUIProxy, which remote-calls the GUIAdapter (oid='gui_adapter') in the IDLE process, which in turn calls the GUI; in the IDLE process, an IdbProxy remote-calls the IdbAdapter (oid='idb_adapter') in the subprocess, which calls Idb. The purpose of the Proxy and Adapter classes is to translate certain arguments and return values that cannot be transported through the RPC barrier, in particular frame and traceback objects.

GUIProxy.interaction, in the Python subprocess, calls rpc.SocketIO.remotecall() via a run.MyHandler instance, passing frame and traceback object IDs instead of the objects themselves. IdbAdapter's methods are called by an IdbProxy, a FrameProxy, a CodeProxy, or a DictProxy; its dict_keys method (dict = dicttable[did]; return dict keys) is needed until the dict_keys type is finished and pickleable, at which point rpc.py:SocketIO._proxify will probably need to be extended. Module __builtins__ can't be pickled. (End of class IdbAdapter.)

start_debugger starts the debugger and its RPC link in the Python subprocess: it starts the subprocess side of the split debugger and sets up that side of the RPC link by instantiating the GUIProxy, Idb debugger, and IdbAdapter objects and linking them together, then registers the IdbAdapter with the RPCServer to handle RPC requests from the split debugger GUI via the IdbProxy in the IDLE process. DictProxy.keys() remote-calls dict_keys on its connection; this is temporary, until dict_keys is a pickleable built-in type. Debugging prints report failed DictProxy.__getattr__ lookups, GUIAdapter interactions ("interaction: %s %s %s" with message, fid, modified_info), and IdbProxy calls and return values. IdbProxy.run ignores locals on purpose, passing frame and traceback IDs, not the objects themselves.

start_remote_debugger starts the subprocess debugger and initializes the debugger GUI and RPC link: it requests the RPCServer to start the Python subprocess debugger and link, sets up the IDLE side of the split debugger by instantiating the IdbProxy, Debugger GUI, and Debugger GUIAdapter objects and linking them together, and registers the GUIAdapter with the RPCClient to handle debugger GUI interaction requests coming from the subprocess debugger via the GUIProxy. The IdbAdapter will pass execution and environment requests coming from the IDLE debugger GUI to the subprocess debugger via the IdbProxy.

close_remote_debugger shuts down the subprocess debugger and the IDLE side of the debugger RPC link: it requests that the RPCServer shut down the subprocess debugger and link, and unregisters the GUIAdapter, which will cause a GC on the IDLE-process debugger and RPC link objects. The second reference to the debugger GUI is deleted in pyshell.close_remote_debugger.
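The id-table pattern described above (wrap_frame and friends) can be shown without the RPC machinery: the subprocess keeps the real object in a table and ships only its id() across the link, and adapters look it back up by id. The names unwrap_frame and FakeFrame below are hypothetical stand-ins for illustration; debugger_r indexes frametable directly and stores real frame objects.

```python
# Minimal sketch of debugger_r's frame-passing scheme.
frametable = {}

def wrap_frame(frame):
    # Register the object and return a transportable integer key.
    fid = id(frame)
    frametable[fid] = frame
    return fid

def unwrap_frame(fid):
    # On the subprocess side, recover the real object from its id.
    return frametable[fid]

class FakeFrame:          # stand-in for a real frame object
    f_lineno = 42

f = FakeFrame()
fid = wrap_frame(f)
assert isinstance(fid, int)        # only the id crosses the RPC barrier
assert unwrap_frame(fid) is f      # the original object is recovered
```

The same trick is applied to tracebacks (wrap_info replaces info[2] with an id into tracebacktable) and to namespace dicts (frame_globals/frame_locals return ids into dicttable).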
import reprlib
import types

from idlelib import debugger

debugging = 0

idb_adap_oid = "idb_adapter"
gui_adap_oid = "gui_adapter"

frametable = {}
dicttable = {}
codetable = {}
tracebacktable = {}


def wrap_frame(frame):
    fid = id(frame)
    frametable[fid] = frame
    return fid


def wrap_info(info):
    "replace info[2], a traceback instance, by its ID"
    if info is None:
        return None
    else:
        traceback = info[2]
        assert isinstance(traceback, types.TracebackType)
        traceback_id = id(traceback)
        tracebacktable[traceback_id] = traceback
        modified_info = (info[0], info[1], traceback_id)
        return modified_info


class GUIProxy:

    def __init__(self, conn, gui_adap_oid):
        self.conn = conn
        self.oid = gui_adap_oid

    def interaction(self, message, frame, info=None):
        self.conn.remotecall(self.oid, "interaction",
                             (message, wrap_frame(frame), wrap_info(info)),
                             {})


class IdbAdapter:

    def __init__(self, idb):
        self.idb = idb

    # ----------called by an IdbProxy----------

    def set_step(self):
        self.idb.set_step()

    def set_quit(self):
        self.idb.set_quit()

    def set_continue(self):
        self.idb.set_continue()

    def set_next(self, fid):
        frame = frametable[fid]
        self.idb.set_next(frame)

    def set_return(self, fid):
        frame = frametable[fid]
        self.idb.set_return(frame)

    def get_stack(self, fid, tbid):
        frame = frametable[fid]
        if tbid is None:
            tb = None
        else:
            tb = tracebacktable[tbid]
        stack, i = self.idb.get_stack(frame, tb)
        stack = [(wrap_frame(frame2), k) for frame2, k in stack]
        return stack, i

    def run(self, cmd):
        import __main__
        self.idb.run(cmd, __main__.__dict__)

    def set_break(self, filename, lineno):
        msg = self.idb.set_break(filename, lineno)
        return msg

    def clear_break(self, filename, lineno):
        msg = self.idb.clear_break(filename, lineno)
        return msg

    def clear_all_file_breaks(self, filename):
        msg = self.idb.clear_all_file_breaks(filename)
        return msg

    # ----------called by a FrameProxy----------

    def frame_attr(self, fid, name):
        frame = frametable[fid]
        return getattr(frame, name)

    def frame_globals(self, fid):
        frame = frametable[fid]
        dict = frame.f_globals
        did = id(dict)
        dicttable[did] = dict
        return did

    def frame_locals(self, fid):
        frame = frametable[fid]
        dict = frame.f_locals
        did = id(dict)
        dicttable[did] = dict
        return did

    def frame_code(self, fid):
        frame = frametable[fid]
        code = frame.f_code
        cid = id(code)
        codetable[cid] = code
        return cid

    # ----------called by a CodeProxy----------

    def code_name(self, cid):
        code = codetable[cid]
        return code.co_name

    def code_filename(self, cid):
        code = codetable[cid]
        return code.co_filename

    # ----------called by a DictProxy----------

    def dict_keys(self, did):
        raise NotImplementedError("dict_keys not public or pickleable")

    def dict_keys_list(self, did):
        dict = dicttable[did]
        return list(dict.keys())

    def dict_item(self, did, key):
        dict = dicttable[did]
        value = dict[key]
        value = reprlib.repr(value)
        return value

# ----------end class IdbAdapter----------


def start_debugger(rpchandler, gui_adap_oid):
    gui_proxy = GUIProxy(rpchandler, gui_adap_oid)
    idb = debugger.Idb(gui_proxy)
    idb_adap = IdbAdapter(idb)
    rpchandler.register(idb_adap_oid, idb_adap)
    return idb_adap_oid


class FrameProxy:

    def __init__(self, conn, fid):
        self._conn = conn
        self._fid = fid
        self._oid = "idb_adapter"
        self._dictcache = {}

    def __getattr__(self, name):
        if name[:1] == "_":
            raise AttributeError(name)
        if name == "f_code":
            return self._get_f_code()
        if name == "f_globals":
            return self._get_f_globals()
        if name == "f_locals":
            return self._get_f_locals()
        return self._conn.remotecall(self._oid, "frame_attr",
                                     (self._fid, name), {})

    def _get_f_code(self):
        cid = self._conn.remotecall(self._oid, "frame_code", (self._fid,), {})
        return CodeProxy(self._conn, self._oid, cid)

    def _get_f_globals(self):
        did = self._conn.remotecall(self._oid, "frame_globals",
                                    (self._fid,), {})
        return self._get_dict_proxy(did)

    def _get_f_locals(self):
        did = self._conn.remotecall(self._oid, "frame_locals",
                                    (self._fid,), {})
        return self._get_dict_proxy(did)

    def _get_dict_proxy(self, did):
        if did in self._dictcache:
            return self._dictcache[did]
        dp = DictProxy(self._conn, self._oid, did)
        self._dictcache[did] = dp
        return dp


class CodeProxy:

    def __init__(self, conn, oid, cid):
        self._conn = conn
        self._oid = oid
        self._cid = cid

    def __getattr__(self, name):
        if name == "co_name":
            return self._conn.remotecall(self._oid, "code_name",
                                         (self._cid,), {})
        if name == "co_filename":
            return self._conn.remotecall(self._oid, "code_filename",
                                         (self._cid,), {})


class DictProxy:

    def __init__(self, conn, oid, did):
        self._conn = conn
        self._oid = oid
        self._did = did

    def keys(self):
        return self._conn.remotecall(self._oid, "dict_keys_list",
                                     (self._did,), {})

    def __getitem__(self, key):
        return self._conn.remotecall(self._oid, "dict_item",
                                     (self._did, key), {})

    def __getattr__(self, name):
        raise AttributeError(name)


class GUIAdapter:

    def __init__(self, conn, gui):
        self.conn = conn
        self.gui = gui

    def interaction(self, message, fid, modified_info):
        frame = FrameProxy(self.conn, fid)
        self.gui.interaction(message, frame, modified_info)


class IdbProxy:

    def __init__(self, conn, shell, oid):
        self.oid = oid
        self.conn = conn
        self.shell = shell

    def call(self, methodname, /, *args, **kwargs):
        value = self.conn.remotecall(self.oid, methodname, args, kwargs)
        return value

    def run(self, cmd, locals):
        seq = self.conn.asyncqueue(self.oid, "run", (cmd,), {})
        self.shell.interp.active_seq = seq

    def get_stack(self, frame, tbid):
        stack, i = self.call("get_stack", frame._fid, tbid)
        stack = [(FrameProxy(self.conn, fid), k) for fid, k in stack]
        return stack, i

    def set_continue(self):
        self.call("set_continue")

    def set_step(self):
        self.call("set_step")

    def set_next(self, frame):
        self.call("set_next", frame._fid)

    def set_return(self, frame):
        self.call("set_return", frame._fid)

    def set_quit(self):
        self.call("set_quit")

    def set_break(self, filename, lineno):
        msg = self.call("set_break", filename, lineno)
        return msg

    def clear_break(self, filename, lineno):
        msg = self.call("clear_break", filename, lineno)
        return msg

    def clear_all_file_breaks(self, filename):
        msg = self.call("clear_all_file_breaks", filename)
        return msg


def start_remote_debugger(rpcclt, pyshell):
    global idb_adap_oid

    idb_adap_oid = rpcclt.remotecall("exec", "start_the_debugger",
                                     (gui_adap_oid,), {})
    idb_proxy = IdbProxy(rpcclt, pyshell, idb_adap_oid)
    gui = debugger.Debugger(pyshell, idb_proxy)
    gui_adap = GUIAdapter(rpcclt, gui)
    rpcclt.register(gui_adap_oid, gui_adap)
    return gui


def close_remote_debugger(rpcclt):
    close_subprocess_debugger(rpcclt)
    rpcclt.unregister(gui_adap_oid)


def close_subprocess_debugger(rpcclt):
    rpcclt.remotecall("exec", "stop_the_debugger", (idb_adap_oid,), {})


def restart_subprocess_debugger(rpcclt):
    idb_adap_oid_ret = rpcclt.remotecall("exec", "start_the_debugger",
                                         (gui_adap_oid,), {})
    assert idb_adap_oid_ret == idb_adap_oid, 'Idb restarted with different oid'


if __name__ == "__main__":
    from unittest import main
    main('idlelib.idle_test.test_debugger_r', verbosity=2, exit=False)
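The adapter never sends frames, code objects, or dicts across the RPC link; it parks them in module-level tables keyed by `id()` and ships only the integer key, flattening values to a short `reprlib` repr before they cross the wire. A minimal standalone sketch of that pattern (the names `wrap_dict` and `dict_item` here are illustrative stand-ins, not this module's API):

```python
import reprlib

dicttable = {}   # stand-in for the module-level tables above


def wrap_dict(d):
    # Park the object server-side and ship only its integer id;
    # keeping it in the table also keeps the id valid.
    did = id(d)
    dicttable[did] = d
    return did


def dict_item(did, key):
    # Values are flattened to a short repr before crossing the link,
    # since arbitrary objects (e.g. module 'builtins') can't be pickled.
    return reprlib.repr(dicttable[did][key])


did = wrap_dict({"x": list(range(100))})
print(type(did).__name__)     # the key is a plain int
print(dict_item(did, "x"))    # truncated repr, safe to pickle
```

The proxy classes on the IDLE side hold the integer key and translate attribute access back into `remotecall()` requests against these tables.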
# XXX TO DO:
# - popup menu
# - support partial or total redisplay
# - more doc strings
# - tooltips

# Object browser.

# XXX TO DO:
# - for classes/modules, add "open source" to object browser

# TODO: in DictTreeItem.keys(), return sorted(self.object).

# _debug_object_browser is an htest.
from reprlib import Repr

from idlelib.tree import TreeItem, TreeNode, ScrolledCanvas

myrepr = Repr()
myrepr.maxstring = 100
myrepr.maxother = 100


class ObjectTreeItem(TreeItem):
    def __init__(self, labeltext, object, setfunction=None):
        self.labeltext = labeltext
        self.object = object
        self.setfunction = setfunction

    def GetLabelText(self):
        return self.labeltext

    def GetText(self):
        return myrepr.repr(self.object)

    def GetIconName(self):
        if not self.IsExpandable():
            return "python"

    def IsEditable(self):
        return self.setfunction is not None

    def SetText(self, text):
        try:
            value = eval(text)
            self.setfunction(value)
        except:
            pass
        else:
            self.object = value

    def IsExpandable(self):
        return not not dir(self.object)

    def GetSubList(self):
        keys = dir(self.object)
        sublist = []
        for key in keys:
            try:
                value = getattr(self.object, key)
            except AttributeError:
                continue
            item = make_objecttreeitem(
                str(key) + " =",
                value,
                lambda value, key=key, object=self.object:
                    setattr(object, key, value))
            sublist.append(item)
        return sublist


class ClassTreeItem(ObjectTreeItem):
    def IsExpandable(self):
        return True

    def GetSubList(self):
        sublist = ObjectTreeItem.GetSubList(self)
        if len(self.object.__bases__) == 1:
            item = make_objecttreeitem("__bases__[0] =",
                                       self.object.__bases__[0])
        else:
            item = make_objecttreeitem("__bases__ =", self.object.__bases__)
        sublist.insert(0, item)
        return sublist


class AtomicObjectTreeItem(ObjectTreeItem):
    def IsExpandable(self):
        return False


class SequenceTreeItem(ObjectTreeItem):
    def IsExpandable(self):
        return len(self.object) > 0

    def keys(self):
        return range(len(self.object))

    def GetSubList(self):
        sublist = []
        for key in self.keys():
            try:
                value = self.object[key]
            except KeyError:
                continue
            def setfunction(value, key=key, object=self.object):
                object[key] = value
            item = make_objecttreeitem(f"{key!r}:", value, setfunction)
            sublist.append(item)
        return sublist


class DictTreeItem(SequenceTreeItem):
    def keys(self):
        keys = list(self.object)
        try:
            keys.sort()
        except:
            pass
        return keys


dispatch = {
    int: AtomicObjectTreeItem,
    float: AtomicObjectTreeItem,
    str: AtomicObjectTreeItem,
    tuple: SequenceTreeItem,
    list: SequenceTreeItem,
    dict: DictTreeItem,
    type: ClassTreeItem,
}


def make_objecttreeitem(labeltext, object, setfunction=None):
    t = type(object)
    if t in dispatch:
        c = dispatch[t]
    else:
        c = ObjectTreeItem
    return c(labeltext, object, setfunction)


def _debug_object_browser(parent):  # htest #
    import sys
    from tkinter import Toplevel
    top = Toplevel(parent)
    top.title("Test debug object browser")
    x, y = map(int, parent.geometry().split('+')[1:])
    top.geometry("+%d+%d" % (x + 100, y + 175))
    top.configure(bd=0, bg="yellow")
    top.focus_set()
    sc = ScrolledCanvas(top, bg="white", highlightthickness=0, takefocus=1)
    sc.frame.pack(expand=1, fill="both")
    item = make_objecttreeitem("sys", sys)
    node = TreeNode(sc.canvas, None, item)
    node.update()


if __name__ == '__main__':
    from unittest import main
    main('idlelib.idle_test.test_debugobj', verbosity=2, exit=False)

    from idlelib.idle_test.htest import run
    run(_debug_object_browser)
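`make_objecttreeitem` picks an item class by looking up the value's exact type in the `dispatch` table and falling back to the generic `ObjectTreeItem`. Because it uses `type(obj)` rather than `isinstance()`, subclasses do not inherit their base type's entry. A standalone sketch of that behavior (the string labels are illustrative stand-ins for the TreeItem classes):

```python
# Exact-type dispatch, mirroring the dispatch table above.
dispatch = {int: "atomic", float: "atomic", str: "atomic",
            tuple: "sequence", list: "sequence", dict: "mapping",
            type: "class"}


def kind(obj):
    # type(obj) lookup: subclasses do NOT inherit their base's entry.
    return dispatch.get(type(obj), "generic")


class MyList(list):
    pass


print(kind(3), kind([1, 2]), kind(bool), kind(MyList()))
```

Note that `kind(bool)` resolves to "class" because `type(bool)` is `type` itself, while a `MyList` instance falls through to the generic default.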
# WrappedObjectTreeItem lives in the Python subprocess;
# StubObjectTreeItem lives in the IDLE process.
from idlelib import rpc


def remote_object_tree_item(item):
    wrapper = WrappedObjectTreeItem(item)
    oid = id(wrapper)
    rpc.objecttable[oid] = wrapper
    return oid


class WrappedObjectTreeItem:

    def __init__(self, item):
        self.__item = item

    def __getattr__(self, name):
        value = getattr(self.__item, name)
        return value

    def _GetSubList(self):
        sub_list = self.__item._GetSubList()
        return list(map(remote_object_tree_item, sub_list))


class StubObjectTreeItem:

    def __init__(self, sockio, oid):
        self.sockio = sockio
        self.oid = oid

    def __getattr__(self, name):
        value = rpc.MethodProxy(self.sockio, self.oid, name)
        return value

    def _GetSubList(self):
        sub_list = self.sockio.remotecall(self.oid, "_GetSubList", (), {})
        return [StubObjectTreeItem(self.sockio, oid) for oid in sub_list]


if __name__ == '__main__':
    from unittest import main
    main('idlelib.idle_test.test_debugobj_r', verbosity=2)
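The wrapper side registers each item in an object table keyed by `id()` and returns only the integer, while `_GetSubList` recursively turns children into ids as well. A self-contained sketch of that shape (the `objecttable`, `WrappedItem`, and `.subs` names are stand-ins; the real code delegates to the item's own `_GetSubList` and stores wrappers in `rpc.objecttable`):

```python
objecttable = {}   # stand-in for rpc.objecttable


def remote_item(item):
    wrapper = WrappedItem(item)
    oid = id(wrapper)
    objecttable[oid] = wrapper   # keeps the wrapper alive; id stays valid
    return oid


class WrappedItem:
    """Stand-in for WrappedObjectTreeItem: delegates to the real item."""
    def __init__(self, item):
        self.__item = item

    def __getattr__(self, name):
        # Any unknown attribute is fetched from the wrapped item.
        return getattr(self.__item, name)

    def _GetSubList(self):
        # Children are wrapped and returned as integer ids too.
        return [remote_item(sub) for sub in self.__item.subs]


class Leaf:
    subs = []
    def GetLabelText(self):
        return "leaf"


class Root:
    subs = [Leaf(), Leaf()]
    def GetLabelText(self):
        return "root"


oid = remote_item(Root())
root = objecttable[oid]
print(root.GetLabelText())                           # delegated attribute
child_oids = root._GetSubList()
print([objecttable[c].GetLabelText() for c in child_oids])
```

On the IDLE side, `StubObjectTreeItem` does the inverse: it keeps the id and turns attribute access into remote calls.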
# The cache is used to remove only the added attributes when changing
# the delegate.  getattr(self.delegate, name) may raise AttributeError.
# resetcache() is really about resetting the delegator's __dict__ to its
# original state; the cache is just a means.
class Delegator:

    def __init__(self, delegate=None):
        self.delegate = delegate
        self.__cache = set()

    def __getattr__(self, name):
        attr = getattr(self.delegate, name)
        setattr(self, name, attr)
        self.__cache.add(name)
        return attr

    def resetcache(self):
        "Removes added attributes while leaving original attributes."
        for key in self.__cache:
            try:
                delattr(self, key)
            except AttributeError:
                pass
        self.__cache.clear()

    def setdelegate(self, delegate):
        "Reset attributes and change delegate."
        self.resetcache()
        self.delegate = delegate


if __name__ == '__main__':
    from unittest import main
    main('idlelib.idle_test.test_delegator', verbosity=2)
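The key trick is that `__getattr__` only fires on a cache miss: once fetched, the attribute is set on the instance itself, so later lookups bypass delegation entirely, and `resetcache()` must scrub those cached names when the delegate changes. A runnable demo (using a condensed copy of the class above so the example is self-contained; `Upper` is a made-up delegate):

```python
class Delegator:
    """Condensed copy of the class above, for a runnable demo."""
    def __init__(self, delegate=None):
        self.delegate = delegate
        self.__cache = set()

    def __getattr__(self, name):
        attr = getattr(self.delegate, name)  # may raise AttributeError
        setattr(self, name, attr)            # cache on the instance
        self.__cache.add(name)
        return attr

    def resetcache(self):
        for key in self.__cache:
            try:
                delattr(self, key)
            except AttributeError:
                pass
        self.__cache.clear()

    def setdelegate(self, delegate):
        self.resetcache()
        self.delegate = delegate


class Upper:
    def shout(self, s):
        return s.upper()


d = Delegator(Upper())
print(d.shout("hi"))        # fetched via __getattr__, then cached
print("shout" in vars(d))   # True: cached as a plain instance attribute
d.setdelegate(None)         # resetcache() removes the cached name
print("shout" in vars(d))   # False
```

Without the cache bookkeeping, a swapped delegate would keep serving the old delegate's bound methods from the instance `__dict__`.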
"""OptionMenu widget modified to allow dynamic menu reconfiguration
and setting of highlightthickness.
"""
# DynOptionMenu adds SetMenu and highlightthickness to OptionMenu;
# highlightthickness adds space around the menu button.
# SetMenu clears and reloads the menu with a new set of options:
#   valueList - list of new options
#   value - initial value to set the optionmenu's menubutton to
# _dyn_option_menu is an htest; its StringVar sets the default value
# and its Button swaps in the new option set.
# This is the only module without unittests, because of the intention
# to replace it.
from tkinter import OptionMenu, _setit, StringVar, Button


class DynOptionMenu(OptionMenu):

    def __init__(self, master, variable, value, *values, **kwargs):
        highlightthickness = kwargs.pop('highlightthickness', None)
        OptionMenu.__init__(self, master, variable, value, *values, **kwargs)
        self['highlightthickness'] = highlightthickness
        self.variable = variable
        self.command = kwargs.get('command')

    def SetMenu(self, valueList, value=None):
        self['menu'].delete(0, 'end')
        for item in valueList:
            self['menu'].add_command(label=item,
                command=_setit(self.variable, item, self.command))
        if value:
            self.variable.set(value)


def _dyn_option_menu(parent):  # htest #
    from tkinter import Toplevel

    top = Toplevel(parent)
    top.title("Test dynamic option menu")
    x, y = map(int, parent.geometry().split('+')[1:])
    top.geometry("200x100+%d+%d" % (x + 250, y + 175))
    top.focus_set()

    var = StringVar(top)
    var.set("Old option set")
    dyn = DynOptionMenu(top, var, "old1", "old2", "old3", "old4",
                        highlightthickness=5)
    dyn.pack()

    def update():
        dyn.SetMenu(["new1", "new2", "new3", "new4"], value="new option set")

    button = Button(top, text="Change option set", command=update)
    button.pack()


if __name__ == '__main__':
    from idlelib.idle_test.htest import run
    run(_dyn_option_menu)
# N.B.: the EditorWindow import inside FileList is overridden in
# pyshell.PyShellFileList.
# FileList.vars exists for EditorWindow.getrawvar (shared Tcl variables).
# In open(), the filename can be a directory when a bad filename is
# passed on the command line; if 'action' is given, don't create a
# window -- perform the action instead (e.g. open in same window).
# TODO: check _test() and convert it to an htest.
"idlelib.filelist" import os from tkinter import messagebox class FileList: from idlelib.editor import EditorWindow def __init__(self, root): self.root = root self.dict = {} self.inversedict = {} self.vars = {} def open(self, filename, action=None): assert filename filename = self.canonize(filename) if os.path.isdir(filename): messagebox.showerror( "File Error", f"{filename!r} is a directory.", master=self.root) return None key = os.path.normcase(filename) if key in self.dict: edit = self.dict[key] edit.top.wakeup() return edit if action: return action(filename) else: edit = self.EditorWindow(self, filename, key) if edit.good_load: return edit else: edit._close() return None def gotofileline(self, filename, lineno=None): edit = self.open(filename) if edit is not None and lineno is not None: edit.gotoline(lineno) def new(self, filename=None): return self.EditorWindow(self, filename) def close_all_callback(self, *args, **kwds): for edit in list(self.inversedict): reply = edit.close() if reply == "cancel": break return "break" def unregister_maybe_terminate(self, edit): try: key = self.inversedict[edit] except KeyError: print("Don't know this EditorWindow object. (close)") return if key: del self.dict[key] del self.inversedict[edit] if not self.inversedict: self.root.quit() def filename_changed_edit(self, edit): edit.saved_change_hook() try: key = self.inversedict[edit] except KeyError: print("Don't know this EditorWindow object. 
(rename)") return filename = edit.io.filename if not filename: if key: del self.dict[key] self.inversedict[edit] = None return filename = self.canonize(filename) newkey = os.path.normcase(filename) if newkey == key: return if newkey in self.dict: conflict = self.dict[newkey] self.inversedict[conflict] = None messagebox.showerror( "Name Conflict", f"You now have multiple edit windows open for {filename!r}", master=self.root) self.dict[newkey] = edit self.inversedict[edit] = newkey if key: try: del self.dict[key] except KeyError: pass def canonize(self, filename): if not os.path.isabs(filename): try: pwd = os.getcwd() except OSError: pass else: filename = os.path.join(pwd, filename) return os.path.normpath(filename) def _test(): from tkinter import Tk from idlelib.editor import fixwordbreaks from idlelib.run import fix_scaling root = Tk() fix_scaling(root) fixwordbreaks(root) root.withdraw() flist = FileList(root) flist.new() if flist.inversedict: root.mainloop() if __name__ == '__main__': from unittest import main main('idlelib.idle_test.test_filelist', verbosity=2)
"""Grep dialog for Find in Files functionality.

Inherits from SearchDialogBase for GUI and uses searchengine
to prepare search pattern.
"""
# Importing OutputWindow at module level fails due to an import loop:
# EditorWindow -> GrepDialog -> OutputWindow -> EditorWindow.
#
# grep(text, io=None, flist=None) is the module-level function that
# accesses the singleton GrepDialog instance and opens the dialog.  If
# text is selected it is used as the search phrase; otherwise the
# previous entry is used.
#   text  - Text widget containing the selected text for the default
#           search phrase.
#   io    - iomenu.IOBinding instance with the default path to search.
#   flist - filelist.FileList instance for the OutputWindow parent.
#
# findfiles(folder, pattern, recursive) generates file names under
# 'folder' (the root directory to search) that match 'pattern';
# 'recursive' is True to include subdirectories.
#
# GrepDialog is the dialog for searching for a phrase in the file
# system, using SearchDialogBase as the basis for the GUI and a
# SearchEngine instance to prepare the search.  Attributes:
#   flist   - filelist.FileList instance for the OutputWindow parent.
#   globvar - string value of the Entry widget for the path to search.
#   globent - Entry widget for globvar, created in create_entries().
#   recvar  - boolean value of the Checkbutton widget for traversing
#             subdirectories.
# open() makes the dialog visible on top of others and ready to use; it
# extends SearchDialogBase.open() to set the initial value of globvar.
# default_command(), bound to <Return>, greps for the search pattern in
# the file path: if the entry values are populated, it sets an
# OutputWindow as stdout and performs the search.  The search dialog is
# closed automatically when the search begins.
# grep_it(prog, path) searches for the compiled ("cooked") pattern
# 'prog' within the lines of the files in the search path: for each
# file in the path directory, open the file and search each line for a
# match; if one is found, write the file and line information to stdout
# (an OutputWindow).  AttributeError is caught because the Tk window
# may have been closed: OutputWindow.text is None, so in ow.write,
# ow.text.insert fails.
# _grep_dialog is an htest.
import fnmatch
import os
import sys

from tkinter import StringVar, BooleanVar
from tkinter.ttk import Checkbutton  # Frame imported in base class.

from idlelib.searchbase import SearchDialogBase
from idlelib import searchengine


def grep(text, io=None, flist=None):
    root = text._root()
    engine = searchengine.get(root)
    if not hasattr(engine, "_grepdialog"):
        engine._grepdialog = GrepDialog(root, engine, flist)
    dialog = engine._grepdialog
    searchphrase = text.get("sel.first", "sel.last")
    dialog.open(text, searchphrase, io)


def walk_error(msg):
    "Handle os.walk error."
    print(msg)


def findfiles(folder, pattern, recursive):
    for dirpath, _, filenames in os.walk(folder, onerror=walk_error):
        yield from (os.path.join(dirpath, name)
                    for name in filenames
                    if fnmatch.fnmatch(name, pattern))
        if not recursive:
            break


class GrepDialog(SearchDialogBase):
    "Dialog for searching multiple files."

    title = "Find in Files Dialog"
    icon = "Grep"
    needwrapbutton = 0

    def __init__(self, root, engine, flist):
        super().__init__(root, engine)
        self.flist = flist
        self.globvar = StringVar(root)
        self.recvar = BooleanVar(root)

    def open(self, text, searchphrase, io=None):
        SearchDialogBase.open(self, text, searchphrase)
        if io:
            path = io.filename or ""
        else:
            path = ""
        dir, base = os.path.split(path)
        head, tail = os.path.splitext(base)
        if not tail:
            tail = ".py"
        self.globvar.set(os.path.join(dir, "*" + tail))

    def create_entries(self):
        "Create base entry widgets and add widget for search path."
        SearchDialogBase.create_entries(self)
        self.globent = self.make_entry("In files:", self.globvar)[0]

    def create_other_buttons(self):
        "Add check button to recurse down subdirectories."
        btn = Checkbutton(
            self.make_frame()[0], variable=self.recvar,
            text="Recurse down subdirectories")
        btn.pack(side="top", fill="both")

    def create_command_buttons(self):
        "Create base command buttons and add button for Search Files."
        SearchDialogBase.create_command_buttons(self)
        self.make_button("Search Files", self.default_command, isdef=True)

    def default_command(self, event=None):
        prog = self.engine.getprog()
        if not prog:
            return
        path = self.globvar.get()
        if not path:
            self.top.bell()
            return
        from idlelib.outwin import OutputWindow  # Leave here!
        save = sys.stdout
        try:
            sys.stdout = OutputWindow(self.flist)
            self.grep_it(prog, path)
        finally:
            sys.stdout = save

    def grep_it(self, prog, path):
        folder, filepat = os.path.split(path)
        if not folder:
            folder = os.curdir
        filelist = sorted(findfiles(folder, filepat, self.recvar.get()))
        self.close()
        pat = self.engine.getpat()
        print(f"Searching {pat!r} in {path} ...")
        hits = 0
        try:
            for fn in filelist:
                try:
                    with open(fn, errors='replace') as f:
                        for lineno, line in enumerate(f, 1):
                            if line[-1:] == '\n':
                                line = line[:-1]
                            if prog.search(line):
                                sys.stdout.write(f"{fn}: {lineno}: {line}\n")
                                hits += 1
                except OSError as msg:
                    print(msg)
            print(f"Hits found: {hits}\n(Hint: right-click to open locations.)"
                  if hits else "No hits.")
        except AttributeError:
            # Tk window has been closed, OutputWindow.text = None,
            # so in OW.write, OW.text.insert fails.
            pass


def _grep_dialog(parent):  # htest #
    from tkinter import Toplevel, Text, SEL, END
    from tkinter.ttk import Frame, Button
    from idlelib.pyshell import PyShellFileList

    top = Toplevel(parent)
    top.title("Test GrepDialog")
    x, y = map(int, parent.geometry().split('+')[1:])
    top.geometry(f"+{x}+{y + 175}")

    flist = PyShellFileList(top)
    frame = Frame(top)
    frame.pack()
    text = Text(frame, height=5)
    text.pack()

    def show_grep_dialog():
        text.tag_add(SEL, "1.0", END)
        grep(text, flist=flist)
        text.tag_remove(SEL, "1.0", END)

    button = Button(frame, text="Show GrepDialog", command=show_grep_dialog)
    button.pack()


if __name__ == "__main__":
    from unittest import main
    main('idlelib.idle_test.test_grep', verbosity=2, exit=False)

    from idlelib.idle_test.htest import run
    run(_grep_dialog)
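The non-recursive case of `findfiles` relies on a property of `os.walk`: with the default top-down traversal, the root directory is yielded first, so breaking after the first iteration confines the match to the top level. A self-contained demonstration against a temporary directory tree (the error handler is omitted here for brevity):

```python
import fnmatch
import os
import tempfile


def findfiles(folder, pattern, recursive):
    """Mirror of grep.findfiles, minus the os.walk onerror handler."""
    for dirpath, _, filenames in os.walk(folder):
        yield from (os.path.join(dirpath, name)
                    for name in filenames
                    if fnmatch.fnmatch(name, pattern))
        if not recursive:
            break  # top-down walk yields the root first


with tempfile.TemporaryDirectory() as d:
    os.mkdir(os.path.join(d, "sub"))
    for name in ("a.py", "b.txt", os.path.join("sub", "c.py")):
        open(os.path.join(d, name), "w").close()

    top = sorted(os.path.basename(p) for p in findfiles(d, "*.py", False))
    all_ = sorted(os.path.basename(p) for p in findfiles(d, "*.py", True))
    print(top)   # only the top-level match
    print(all_)  # includes sub/c.py as well
```

`grep_it` then sorts the generator's output so hits are reported in a stable file order.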
# History implements the IDLE Shell history mechanism:
#   store() - store source statement (called from pyshell.resetoutput()).
#   fetch() - fetch stored statement matching prefix already entered.
#   history_next() - bound to the <<history-next>> event (default Alt-N).
#   history_prev() - bound to the <<history-prev>> event (default Alt-P).
# __init__ initializes data attributes and binds event methods:
#   text    - IDLE wrapper of the tk Text widget, with .bell().
#   history - source statements, possibly with multiple lines.
#   prefix  - source already entered at prompt; filters history list.
#   pointer - index into history.
#   cyclic  - whether to wrap around the history list or not.
# fetch() fetches a statement and replaces the current line in the text
# widget.  It sets prefix and pointer as needed for successive fetches,
# resets them to None, None when returning to the start line, and sounds
# the bell when returning to the start line or unable to leave a line
# because cyclic is False.  When prefix or pointer must be reset after a
# cursor move, the pointer starts at nhist (will be decremented) for
# reverse fetches or -1 (will be incremented) otherwise; hitting either
# end aborts history_next or history_prev as appropriate.
# store() avoids duplicates.
"Implement Idle Shell history mechanism with History class" from idlelib.config import idleConf class History: def __init__(self, text): self.text = text self.history = [] self.prefix = None self.pointer = None self.cyclic = idleConf.GetOption("main", "History", "cyclic", 1, "bool") text.bind("<<history-previous>>", self.history_prev) text.bind("<<history-next>>", self.history_next) def history_next(self, event): "Fetch later statement; start with earliest if cyclic." self.fetch(reverse=False) return "break" def history_prev(self, event): "Fetch earlier statement; start with most recent." self.fetch(reverse=True) return "break" def fetch(self, reverse): nhist = len(self.history) pointer = self.pointer prefix = self.prefix if pointer is not None and prefix is not None: if self.text.compare("insert", "!=", "end-1c") or \ self.text.get("iomark", "end-1c") != self.history[pointer]: pointer = prefix = None self.text.mark_set("insert", "end-1c") if pointer is None or prefix is None: prefix = self.text.get("iomark", "end-1c") if reverse: pointer = nhist else: if self.cyclic: pointer = -1 else: self.text.bell() return nprefix = len(prefix) while True: pointer += -1 if reverse else 1 if pointer < 0 or pointer >= nhist: self.text.bell() if not self.cyclic and pointer < 0: return else: if self.text.get("iomark", "end-1c") != prefix: self.text.delete("iomark", "end-1c") self.text.insert("iomark", prefix, "stdin") pointer = prefix = None break item = self.history[pointer] if item[:nprefix] == prefix and len(item) > nprefix: self.text.delete("iomark", "end-1c") self.text.insert("iomark", item, "stdin") break self.text.see("insert") self.text.tag_remove("sel", "1.0", "end") self.pointer = pointer self.prefix = prefix def store(self, source): "Store Shell input statement into history list." 
source = source.strip() if len(source) > 2: try: self.history.remove(source) except ValueError: pass self.history.append(source) self.pointer = None self.prefix = None if __name__ == "__main__": from unittest import main main('idlelib.idle_test.test_history', verbosity=2, exit=False)
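`store()` keeps the history list free of duplicates and of trivially short entries: repeated statements are moved to the end rather than appended twice, and anything of three characters or fewer after stripping is dropped. That logic is easy to exercise without a Tk widget (the free function below is a stand-in mirroring only `History.store`, not the class itself):

```python
history = []


def store(source):
    """Mirror of History.store: skip trivia, move repeats to the end."""
    source = source.strip()
    if len(source) > 2:              # very short entries are not kept
        try:
            history.remove(source)   # drop an earlier duplicate, if any
        except ValueError:
            pass
        history.append(source)


store("print(1)")
store("x = 2")
store("ab")          # too short: ignored
store("print(1)")    # moved to the end, not duplicated
print(history)       # ['x = 2', 'print(1)']
```

Moving repeats to the end means Alt-P always reaches the most recently used form of a statement first, which is why `fetch()` can walk the list by a simple pointer.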