[Unrecoverable binary: CPython 3.6 bytecode (.pyc) for /usr/lib/python3.6/lexer.py,
module pygments.lexer. The readable fragments below are the docstrings and names
embedded in the dump, presented as a source skeleton with method bodies elided.]

Exported names recovered from the dump:

__all__ = ['Lexer', 'RegexLexer', 'ExtendedRegexLexer', 'DelegatingLexer',
           'LexerContext', 'include', 'inherit', 'bygroups', 'using', 'this',
           'default', 'words']

_encoding_map covers the BOMs for utf-8, utf-32, utf-32be, utf-16 and utf-16be
(the exact byte values are not recoverable from the dump).

Module docstring:
    pygments.lexer
    ~~~~~~~~~~~~~~

    Base lexer classes.

    :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
Imports recovered from the dump: __future__.print_function; re; sys; time;
pygments.filter (apply_filters, Filter); pygments.filters (get_filter_by_name);
pygments.token (Error, Text, Other, _TokenType); pygments.util (get_bool_opt,
get_int_opt, get_list_opt, make_analysator, text_type, add_metaclass,
iteritems, Future, guess_decode); pygments.regexopt (regex_opt).

class LexerMeta(type) -- docstring:
    This metaclass automagically converts ``analyse_text`` methods into
    static methods which always return float values.

class Lexer (metaclass: LexerMeta) -- docstring:
    Lexer for a specific language.

    Basic options recognized:
    ``stripnl``
        Strip leading and trailing newlines from the input (default: True).
    ``stripall``
        Strip all leading and trailing whitespace from the input
        (default: False).
    ``ensurenl``
        Make sure that the input ends with a newline (default: True).  This
        is required for some lexers that consume input linewise.

        .. versionadded:: 1.3

    ``tabsize``
        If given and greater than 0, expand tabs in the input (default: 0).
    ``encoding``
        If given, must be an encoding name. This encoding will be used to
        convert the input string to Unicode, if it is not already a Unicode
        string (default: ``'guess'``, which uses a simple UTF-8 / Locale /
        Latin1 detection).  Can also be ``'chardet'`` to use the chardet
        library, if it is installed.
    ``inencoding``
        Overrides the ``encoding`` if given.
[Bytecode for Lexer.__init__ and __repr__. Recoverable behavior: __init__
reads the options ``stripnl``, ``stripall``, ``ensurenl``, ``tabsize`` and
``encoding`` (overridden by ``inencoding``), then adds each entry of the
``filters`` option via add_filter.]

Lexer.add_filter -- docstring:
    Add a new stream filter to this lexer.

Lexer.analyse_text -- docstring:
        Has to return a float between ``0`` and ``1`` that indicates
        if a lexer wants to highlight this text. Used by ``guess_lexer``.
        If this method returns ``0`` it won't highlight it in any case, if
        it returns ``1`` highlighting with this lexer is guaranteed.

        The `LexerMeta` metaclass automatically wraps this function so
        that it works like a static method (no ``self`` or ``cls``
        parameter) and the return value is automatically converted to
        `float`. If the return value is an object that is boolean `False`
        it's the same as if the return value was ``0.0``.
        Nr*)�textr*r*r,r/}szLexer.analyse_textFcs�t�t�s�jdkr"t��\�}nʈjdkr�yddl}Wntk
rTtd��YnXd}x4tD],\}}�j|�r`�t|�d�j	|d�}Pq`W|dkr�|j
�dd��}�j	|jd�p�d	d�}|�n&�j	�j���jd
�r�td
�d��n�jd
��r
�td
�d���jdd���jd
d���j
�r4�j��n�j�rF�jd���jdk�r^�j�j���j�r|�jd��r|�d7���fdd�}	|	�}
|�s�t|
�j��}
|
S)a=
        Return an iterable of (tokentype, value) pairs generated from
        `text`. If `unfiltered` is set to `True`, the filtering mechanism
        is bypassed even if filters are defined.

        Also preprocess the text, i.e. expand tabs and strip it if
        wanted, and apply registered filters.
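The preprocessing steps this docstring lists (newline/whitespace stripping, tab expansion, trailing-newline guarantee) can be sketched as a plain function. The helper name and its defaults below are illustrative, not the Pygments API:

```python
def preprocess(text, stripnl=True, stripall=False, tabsize=0, ensurenl=True):
    # Rough sketch of the text normalization Lexer.get_tokens describes.
    if stripall:
        text = text.strip()        # strip all surrounding whitespace
    elif stripnl:
        text = text.strip('\n')    # strip only surrounding newlines
    if tabsize > 0:
        text = text.expandtabs(tabsize)
    if ensurenl and not text.endswith('\n'):
        text += '\n'               # some lexers consume input linewise
    return text
```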
[Bytecode for the body of get_tokens. Recoverable behavior: with
``encoding='guess'`` the input is decoded via guess_decode; with
``encoding='chardet'`` the chardet library is imported, raising ImportError
("To enable chardet encoding guessing, please install the chardet library
from http://chardet.feedparser.org/") if missing; a known BOM from
_encoding_map short-circuits the detection. Line endings are normalized
before the stripping/tab/newline options are applied, and the resulting
tokens are streamed through the registered filters unless ``unfiltered``.]

Lexer.get_tokens_unprocessed -- docstring:
        Return an iterable of (index, tokentype, value) pairs where "index"
        is the starting position of the token within the input text.

        In subclasses, implement this method as a generator to
        maximize effectiveness.
[Class attributes recovered for Lexer: name, aliases, filenames,
alias_filenames, mimetypes, priority.]

class DelegatingLexer(Lexer) -- docstring:
    This lexer takes two lexers as arguments: a root lexer and
    a language lexer. First everything is scanned using the language
    lexer, afterwards all ``Other`` tokens are lexed using the root
    lexer.

    The lexers from the ``template`` lexer package use this base lexer.
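The first pass described above can be sketched with plain data: run the language lexer, buffer the text of every ``Other`` token, and remember where the runs of real tokens belong so they can later be spliced around the root lexer's output. Token types are plain strings here and the helper name is illustrative, not the Pygments API:

```python
def split_other(tokens):
    # Collect the stream's 'Other' text (to be re-lexed by the root lexer)
    # and record where each run of non-Other tokens must be re-inserted.
    buffered = []
    insertions = []   # (position in ''.join(buffered), [(token, value), ...])
    run = []
    pos = 0
    for tok, val in tokens:
        if tok == 'Other':
            if run:
                insertions.append((pos, run))
                run = []
            buffered.append(val)
            pos += len(val)
        else:
            run.append((tok, val))
    if run:
        insertions.append((pos, run))
    return ''.join(buffered), insertions
```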
class include(str) -- docstring:
    Indicates that a state should include rules from another state.

class _inherit -- docstring:
    Indicates that a state should inherit from its superclass.

inherit = _inherit()

class combined(tuple) -- docstring:
    Indicates a state combined from multiple states.

class _PseudoMatch -- docstring:
    A pseudo match object constructed from a string.
    (Implements start(), end(), group(), groups() and groupdict().)

def bygroups(*args) -- docstring:
    Callback that yields multiple actions for each group in the match.
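In spirit, the callback pairs each capture group of the rule's regex with the token type at the same position. A stripped-down version (ignoring the nested-callback and lexer-context handling of the real function; names are illustrative):

```python
import re

def bygroups_sketch(*token_types):
    # Return a callback yielding (start, token_type, text) per capture group.
    def callback(match):
        for i, ttype in enumerate(token_types):
            text = match.group(i + 1)
            if text:
                yield match.start(i + 1), ttype, text
    return callback

# Split "name = value" into three differently typed tokens.
m = re.match(r'(\w+)(\s*=\s*)(\d+)', 'answer = 42')
cb = bygroups_sketch('Name', 'Operator', 'Number')
tokens = list(cb(m))
```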
class _This -- docstring:
    Special singleton used for indicating the caller class.
    Used by ``using``.
this = _This()

def using(_other, **kwargs) -- docstring:
    Callback that processes the match with a different lexer.

    The keyword arguments are forwarded to the lexer, except `state` which
    is handled separately.

    `state` specifies the state that the new lexer will start in, and can
    be an enumerable such as ('root', 'inline', 'string') or a simple
    string which is assumed to be on top of the root state.

    Note: For that to work, `_other` must not be an `ExtendedRegexLexer`.
[Bytecode of using(): the ``state`` keyword is popped and turned into a
``stack`` argument ('root' plus the given state) before the callback is
built.]

class default -- docstring:
    Indicates a state or state action (e.g. #pop) to apply.
    For example default('#pop') is equivalent to ('', Token, '#pop')
    Note that state tuples may be used as well.

    .. versionadded:: 2.0
    cCs
||_dS)N)r�)rCr�r*r*r,rE�szdefault.__init__N)r5r6r7r8rEr*r*r*r,r�sc@s"eZdZdZddd�Zdd�ZdS)	rz�
    Indicates a list of literal words that is transformed into an optimized
    regex that matches any of the words.

    .. versionadded:: 2.0
    recCs||_||_||_dS)N)r�prefix�suffix)rCrr�r�r*r*r,rE�szwords.__init__cCst|j|j|jd�S)N)r�r�)rrr�r�)rCr*r*r,rA�sz	words.getN)rere)r5r6r7r8rErAr*r*r*r,r�s
c@sJeZdZdZdd�Zdd�Zdd�Zdd	�Zddd�Zd
d�Z	dd�Z
d
S)�RegexLexerMetazw
    Metaclass for RegexLexer, creates the self._tokens attribute from
    self.tokens on the first instantiation.
Methods recovered for RegexLexerMeta (docstrings embedded in the bytecode):

    _process_regex     -- "Preprocess the regular expression component of a
                          token definition."
    _process_token     -- "Preprocess the token component of a token
                          definition."
    _process_new_state -- "Preprocess the state transition action of a token
                          definition." (handles '#pop', '#push', '#pop:n' and
                          combined states, which become synthetic '_tmp_%d'
                          states)
    _process_state     -- "Preprocess a single state definition." (reports
                          "uncompilable regex %r in state %r of %r: %s" when
                          re.compile fails)
    process_tokendef   -- "Preprocess a dictionary of token definitions."
    get_tokendefs      -- docstring:
        Merge tokens from superclasses in MRO order, returning a single
        tokendef dictionary.

        Any state that is not defined by a subclass will be inherited
        automatically.  States that *are* defined by subclasses will, by
        default, override that state in the superclass.  If a subclass wishes
        to inherit definitions from a superclass, it can use the special value
        "inherit", which will cause the superclass' state definition to be
        included at that point in the state.
    __call__           -- "Instantiate cls after preprocessing its token
                          definitions."

class RegexLexer(Lexer, metaclass: RegexLexerMeta) -- docstring:
    Base for simple stateful regular expression-based lexers.
    Simplifies the lexing process so that you need only
    provide a list of states and regular expressions.
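The matching loop behind this class amounts to a small state machine: try each (regex, token, new-state) rule of the topmost state at the current position; on a match, emit the token, advance past it, and apply any '#pop'/'#push'/state transition. A self-contained miniature with plain token-name strings standing in for Pygments token types (the state table is an invented example):

```python
import re

STATES = {
    'root': [
        (r'"', 'Punct', 'string'),     # enter the 'string' state
        (r'\d+', 'Number', None),
        (r'\s+', 'Whitespace', None),
        (r'\w+', 'Name', None),
    ],
    'string': [
        (r'"', 'Punct', '#pop'),       # leave the 'string' state
        (r'[^"]+', 'String', None),
    ],
}

def tokenize(text, states=STATES):
    pos, stack = 0, ['root']
    while pos < len(text):
        for pattern, token, new_state in states[stack[-1]]:
            m = re.compile(pattern).match(text, pos)
            if m:
                yield pos, token, m.group()
                pos = m.end()
                if new_state == '#pop':
                    stack.pop()
                elif new_state is not None:
                    stack.append(new_state)
                break
        else:
            yield pos, 'Error', text[pos]   # no rule matched: emit one char
            pos += 1
```

The real class also supports callbacks in the token slot, tuples of states, '#push', and resetting to 'root' at a newline when nothing matches.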
Class attributes: flags = re.MULTILINE; tokens = {}.

RegexLexer.get_tokens_unprocessed(self, text, stack=('root',)) -- docstring:
    Split ``text`` into (tokentype, text) pairs.

    ``stack`` is the initial stack (default: ``['root']``).

[Bytecode of the matching loop: try each rule of the topmost state at the
current position; on a match, yield the token (or invoke the callback),
advance past the match, and apply '#pop'/'#push'/state-tuple transitions;
on no match, reset to 'root' at a newline or yield an Error token for one
character.]

class LexerContext -- docstring:
    A helper object that holds lexer position data.
[Bytecode: LexerContext(text, pos, stack=None, end=None); __repr__ returns
'LexerContext(%r, %r, %r)' with text, pos and stack.]

class ExtendedRegexLexer(RegexLexer) -- docstring:
    A RegexLexer that uses a context object to store its state.
    Nccs
|j}|st|d�}|d}n|}||jd}|j}�x̐x�|D�],\}}}|||j|j�}	|	rB|dk	r�t|�tkr�|j||	j�fV|	j�|_n.x|||	|�D]
}
|
Vq�W|s�||jd	}|dk	�rnt	|t
��r(x�|D]D}|dk�r�|jj�q�|dk�r|jj|jd
�q�|jj|�q�Wn8t	|t
��rB|j|d�=n|dk�r`|jj|jd�n||jd}PqBWyt|j|jk�r�P||jdk�r�dg|_|d}|jtdfV|jd7_w:|jt||jfV|jd7_Wq:tk
�rPYq:Xq:WdS)
z
        Split ``text`` into (tokentype, text) pairs.
        If ``context`` is given, use this lexer context instead.
[Bytecode: the same matching loop as RegexLexer, but the position and state
stack live on the LexerContext object, and rule callbacks may mutate it.]

def do_insertions(insertions, tokens) -- docstring:
    Helper for lexers which must combine the results of several
    sublexers.

    ``insertions`` is a list of ``(index, itokens)`` pairs.
    Each ``itokens`` iterable should be inserted at position
    ``index`` into the token stream given by the ``tokens``
    argument.

    The result is a combined token stream.

    TODO: clean up the code here.
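A simplified, list-based version of this merge is shown below. The real generator also tracks real output positions and splits tokens that straddle an insertion point; here each insertion index is assumed to fall on a token boundary, and the function name is illustrative:

```python
def merge_insertions(insertions, tokens):
    # Splice each (index, itokens) into `tokens`, where `index` counts
    # characters of token text emitted so far.
    out, pos = [], 0
    pending = list(insertions)
    for tok, val in tokens:
        while pending and pending[0][0] <= pos:
            out.extend(pending.pop(0)[1])
        out.append((tok, val))
        pos += len(val)
    for _, itokens in pending:       # anything left goes at the end
        out.extend(itokens)
    return out
```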
[Bytecode of do_insertions, followed by the profiling classes.]

class ProfilingRegexLexerMeta(RegexLexerMeta) -- docstring:
    Metaclass for ProfilingRegexLexer, collects regex timing info.

class ProfilingRegexLexer(RegexLexer, metaclass: ProfilingRegexLexerMeta) -- docstring:
    Drop-in replacement for RegexLexer that does profiling of its regexes.
    (Its get_tokens_unprocessed times each regex via _prof_data and prints a
    table headed "Profiling result for %s lexing %d chars in %.3f ms" with
    ncalls/tottime/percall per pattern.)

[Remainder: marshalled constants and the .pyc trailer.]
Filemanager

Name Type Size Permission Actions
__init__.cpython-36.opt-1.pyc File 2.98 KB 0644
__init__.cpython-36.pyc File 2.98 KB 0644
cmdline.cpython-36.opt-1.pyc File 12.1 KB 0644
cmdline.cpython-36.pyc File 12.1 KB 0644
console.cpython-36.opt-1.pyc File 1.85 KB 0644
console.cpython-36.pyc File 1.85 KB 0644
filter.cpython-36.opt-1.pyc File 2.54 KB 0644
filter.cpython-36.pyc File 2.54 KB 0644
formatter.cpython-36.opt-1.pyc File 2.86 KB 0644
formatter.cpython-36.pyc File 2.86 KB 0644
lexer.cpython-36.opt-1.pyc File 22.95 KB 0644
lexer.cpython-36.pyc File 23.56 KB 0644
modeline.cpython-36.opt-1.pyc File 1.09 KB 0644
modeline.cpython-36.pyc File 1.09 KB 0644
plugin.cpython-36.opt-1.pyc File 1.94 KB 0644
plugin.cpython-36.pyc File 1.94 KB 0644
regexopt.cpython-36.opt-1.pyc File 2.73 KB 0644
regexopt.cpython-36.pyc File 2.73 KB 0644
scanner.cpython-36.opt-1.pyc File 3.38 KB 0644
scanner.cpython-36.pyc File 3.38 KB 0644
sphinxext.cpython-36.opt-1.pyc File 4.38 KB 0644
sphinxext.cpython-36.pyc File 4.38 KB 0644
style.cpython-36.opt-1.pyc File 3.61 KB 0644
style.cpython-36.pyc File 3.66 KB 0644
token.cpython-36.opt-1.pyc File 4.14 KB 0644
token.cpython-36.pyc File 4.14 KB 0644
unistring.cpython-36.opt-1.pyc File 25.67 KB 0644
unistring.cpython-36.pyc File 25.67 KB 0644
util.cpython-36.opt-1.pyc File 10.27 KB 0644
util.cpython-36.pyc File 10.27 KB 0644