The context passed by the tokenizer.
Takes the input string and an offset in this string, and returns the next offset that is greater than or equal to the given offset if the reader matched, or an offset that is less than the given offset if the reader didn't match. The reader may return offsets that exceed the input length.
const abcReader: Reader = (input, offset) => {
return input.startsWith('abc', offset) ? offset + 3 : -1;
};
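For example, given the contract above:
// Matches: 'abc' starts at offset 2, so the next offset 2 + 3 = 5 is returned.
abcReader('xxabc', 2);
// Doesn't match at offset 0, so -1 (an offset less than 0) is returned.
abcReader('xxabc', 0);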
The tokenizer stage type.
The context passed by the tokenizer.
Returns the stage to which the tokenizer should transition.
The input chunk from which the current token was read.
The chunk-relative offset where the current token was read.
The number of chars read by the rule.
The context passed by the tokenizer.
The current state of the tokenizer.
The stage to which the tokenizer should transition.
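For illustration, a rule whose stage callback receives these arguments might be sketched as follows; the rule shape and the property names on, type, reader, and to are assumptions, not part of the documented API:
const colonRule = {
  // Stages at which this rule is applied (assumed property).
  on: ['KEY'],
  // The type of the emitted token (assumed property).
  type: 'COLON',
  // The reader that matches the token; a text factory is documented below.
  reader: text(':'),
  // Returns the stage to which the tokenizer should transition.
  to: (chunk, offset, length, context, state) => 'VALUE',
};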
The type of tokens emitted by rules.
The context passed by the tokenizer.
Triggered when a token was read from the input stream.
The substring of the current token:
const tokenValue = chunk.substring(offset, offset + length);
The offset of this token from the start of the input stream (useful if you're using Tokenizer.write):
const absoluteOffset = state.chunkOffset + offset;
The type of the token that was read.
The input chunk from which the token was read.
The chunk-relative offset where the token starts.
The number of chars read by the rule.
The context passed by the tokenizer.
The current state of the tokenizer.
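Taken together, a token handler could be sketched like this; treating it as a single callback with this exact parameter order is an assumption for illustration:
const handleToken = (type, chunk, offset, length, context, state) => {
  // The substring of the current token.
  const tokenValue = chunk.substring(offset, offset + length);
  // The offset of the token from the start of the input stream.
  const absoluteOffset = state.chunkOffset + offset;
  console.log(type, tokenValue, absoluteOffset);
};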
The singleton reader that always returns -1.
The singleton reader that always returns the current offset.
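A minimal sketch of how these singletons behave, assuming they are exported as never and none and converted with toReaderFunction (documented below):
toReaderFunction(never)('abc', 0); // → -1: never matches
toReaderFunction(none)('abc', 1);  // → 1: matches without consuming chars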
Creates a reader that repeatedly reads chars using the given reader.
The context passed by the tokenizer.
The reader that reads chars.
Reader options.
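For example, a reader for one or more decimal digits might look like this; the names all and regex and the minimumCount option are assumptions:
// Repeatedly applies the single-digit reader; requires at least one match.
const digitsReader = all(regex(/[0-9]/), { minimumCount: 1 });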
Creates a new pure tokenizer function.
The type of tokens emitted by the tokenizer.
The context that rules may consume.
The list of rules that the tokenizer uses to read tokens from the input chunks.
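A sketch of this overload, assuming the factory is named createTokenizer; the rule shape shown is also an assumption:
const tokenize = createTokenizer([
  // Each rule pairs a token type with the reader that matches it.
  { type: 'DIGITS', reader: all(regex(/[0-9]/)) },
  { type: 'ABC', reader: text('abc') },
]);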
Creates a new pure tokenizer function.
The type of tokens emitted by the tokenizer.
The type of stages at which rules are applied.
The context that rules may consume.
The list of rules that the tokenizer uses to read tokens from the input chunks.
The initial stage from which tokenization starts.
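For the staged overload, a hypothetical two-stage tokenizer built with the same assumed createTokenizer; the on and to properties are assumptions:
const tokenizeStaged = createTokenizer(
  [
    // Applied at the 'KEY' stage; transitions to 'VALUE' after a colon.
    { on: ['KEY'], type: 'COLON', reader: text(':'), to: 'VALUE' },
    // Applied at the 'VALUE' stage; transitions back to 'KEY'.
    { on: ['VALUE'], type: 'WORD', reader: all(regex(/\w/)), to: 'KEY' },
  ],
  'KEY' // The initial stage from which tokenization starts.
);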
Creates a reader that returns the input length plus the offset.
The offset added to the input length.
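So, assuming this factory is named end, a reader that matches the end of the input can be sketched as:
// For 'abc' this returns 3 regardless of the current offset,
// so it matches whenever the current offset is at most 3.
toReaderFunction(end(0))('abc', 3); // → 3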
Creates a reader that matches a substring.
The RegExp to match.
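For example, assuming the factory is named regex:
// Reads a run of lowercase letters starting at the current offset.
const lowerReader = regex(/[a-z]+/);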
Creates a reader that skips the given number of chars.
The number of chars to skip.
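Assuming the factory is named skip:
// Unconditionally advances the offset by 3 chars.
toReaderFunction(skip(3))('abcdef', 1); // → 4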
Creates a reader that reads a substring from the input.
The text to match.
Reader options.
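Assuming the factory is named text, this reproduces the hand-written abcReader shown earlier:
const abcTextReader = text('abc');
toReaderFunction(abcTextReader)('xxabc', 2); // → 5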
Converts the Reader instance to a function.
The context passed by the tokenizer.
The reader to convert to a function.
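A sketch of the conversion, reusing the assumed text factory:
const readAbc = toReaderFunction(text('abc'));
readAbc('abcabc', 3); // → 6 (matched)
readAbc('xyz', 0);    // → an offset less than 0 (no match)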
Creates a reader that reads chars until the given reader matches.
The context passed by the tokenizer.
The reader that reads chars.
Reader options.
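For example, assuming the factory is named until, a reader that consumes everything up to a double quote:
// Whether the '"' itself is consumed would depend on the reader options,
// which are assumed here.
const untilQuoteReader = until(text('"'));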
The reader definition that can be compiled into a function that reads chars from the input string.