Coverage for pygments.lexers.special : 71%

# -*- coding: utf-8 -*-
"""
    pygments.lexers.special
    ~~~~~~~~~~~~~~~~~~~~~~~

    Special lexers.

    :copyright: Copyright 2006-2014 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""
""" "Null" lexer, doesn't highlight anything. """
""" Recreate a token stream formatted with the `RawTokenFormatter`. This lexer raises exceptions during parsing if the token stream in the file is malformed.
Additional options accepted:
`compress` If set to ``"gz"`` or ``"bz2"``, decompress the token stream with the given compression algorithm before lexing (default: ``""``). """
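
As a sketch of the round trip this lexer exists for, assuming a working Pygments installation; the last two lines exercise the `compress` option described above:

    from pygments import highlight
    from pygments.lexers import PythonLexer
    from pygments.lexers.special import RawTokenLexer
    from pygments.formatters import HtmlFormatter, RawTokenFormatter

    # Dump a token stream once, then re-highlight it later without re-lexing.
    raw = highlight('print("hi")\n', PythonLexer(), RawTokenFormatter())
    html = highlight(raw, RawTokenLexer(), HtmlFormatter())

    # Same idea with gzip compression on both ends.
    gz = highlight('print("hi")\n', PythonLexer(), RawTokenFormatter(compress='gz'))
    tokens = list(RawTokenLexer(compress='gz').get_tokens(gz))
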
    def __init__(self, **options):
        self.compress = get_choice_opt(options, 'compress',
                                       ['', 'none', 'gz', 'bz2'], '')
        Lexer.__init__(self, **options)

    def get_tokens(self, text):
        if isinstance(text, text_type):
            # raw token stream never has any non-ASCII characters
            text = text.encode('ascii')
        if self.compress == 'gz':
            import gzip
            gzipfile = gzip.GzipFile('', 'rb', 9, BytesIO(text))
            text = gzipfile.read()
        elif self.compress == 'bz2':
            import bz2
            text = bz2.decompress(text)
        # do not call Lexer.get_tokens() because we do not want Unicode
        # decoding to occur, and stripping is not optional.
            else:
                ttype = _ttype_cache.get(ttypestr)
                if not ttype:
                    ttype = Token
                    ttypes = ttypestr.split('.')[1:]
                    for ttype_ in ttypes:
                        if not ttype_ or not ttype_[0].isupper():
                            raise ValueError('malformed token name')
                        ttype = getattr(ttype, ttype_)
                    _ttype_cache[ttypestr] = ttype
                val = val[2:-2].decode('unicode-escape')
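
For context, an illustrative sketch of what the branch above does: each raw token line pairs a dotted token type name with a repr of its text, and the name is resolved attribute by attribute starting from `Token`:

    from pygments.token import Token

    # One raw token line looks like: "Token.Name.Builtin\t'print'\n"
    ttypestr = 'Token.Name.Builtin'
    ttype = Token
    for part in ttypestr.split('.')[1:]:
        ttype = getattr(ttype, part)
    assert ttype is Token.Name.Builtin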