Processes for running doctests
This module controls the processes started by Sage that actually run the doctests.
EXAMPLES:
The following examples are used in doctesting this file:
sage: doctest_var = 42; doctest_var^2
1764
sage: R.<a> = ZZ[]
sage: a + doctest_var
a + 42
AUTHORS:
- David Roe (2012-03-27) – initial version, based on Robert Bradshaw’s code
- Jeroen Demeyer (2013 and 2015) – major improvements to forking and logging
- class sage.doctest.forker.DocTestDispatcher(controller)
Bases: SageObject
Creates parallel DocTestWorker processes and dispatches doctesting tasks.
- dispatch()
Run the doctests for the controller’s specified sources, by calling parallel_dispatch() or serial_dispatch() according to the --serial option.
EXAMPLES:
sage: from sage.doctest.control import DocTestController, DocTestDefaults
sage: from sage.doctest.forker import DocTestDispatcher
sage: from sage.doctest.reporting import DocTestReporter
sage: from sage.doctest.util import Timer
sage: from sage.env import SAGE_SRC
sage: import os
sage: freehom = os.path.join(SAGE_SRC, 'sage', 'modules', 'free_module_homspace.py')
sage: bigo = os.path.join(SAGE_SRC, 'sage', 'rings', 'big_oh.py')
sage: DC = DocTestController(DocTestDefaults(), [freehom, bigo])
sage: DC.expand_files_into_sources()
sage: DD = DocTestDispatcher(DC)
sage: DR = DocTestReporter(DC)
sage: DC.reporter = DR
sage: DC.dispatcher = DD
sage: DC.timer = Timer().start()
sage: DD.dispatch()
sage -t .../sage/modules/free_module_homspace.py
    [... tests, ... s]
sage -t .../sage/rings/big_oh.py
    [... tests, ... s]
- parallel_dispatch()
Run the doctests from the controller’s specified sources in parallel.
This creates DocTestWorker subprocesses, while the master process checks for timeouts and collects and displays the results.
EXAMPLES:
sage: from sage.doctest.control import DocTestController, DocTestDefaults
sage: from sage.doctest.forker import DocTestDispatcher
sage: from sage.doctest.reporting import DocTestReporter
sage: from sage.doctest.util import Timer
sage: from sage.env import SAGE_SRC
sage: import os
sage: crem = os.path.join(SAGE_SRC, 'sage', 'databases', 'cremona.py')
sage: bigo = os.path.join(SAGE_SRC, 'sage', 'rings', 'big_oh.py')
sage: DC = DocTestController(DocTestDefaults(), [crem, bigo])
sage: DC.expand_files_into_sources()
sage: DD = DocTestDispatcher(DC)
sage: DR = DocTestReporter(DC)
sage: DC.reporter = DR
sage: DC.dispatcher = DD
sage: DC.timer = Timer().start()
sage: DD.parallel_dispatch()
sage -t .../databases/cremona.py
    [... tests, ... s]
sage -t .../rings/big_oh.py
    [... tests, ... s]
If the exitfirst=True option is given, the results for a failing module are printed immediately and all other ongoing tests are canceled:

sage: from tempfile import NamedTemporaryFile as NTF
sage: with NTF(suffix=".py", mode="w+t") as f1, \
....:         NTF(suffix=".py", mode="w+t") as f2:
....:     _ = f1.write("'''\nsage: import time; time.sleep(60)\n'''")
....:     f1.flush()
....:     _ = f2.write("'''\nsage: True\nFalse\n'''")
....:     f2.flush()
....:     DC = DocTestController(DocTestDefaults(exitfirst=True,
....:                                            nthreads=2),
....:                            [f1.name, f2.name])
....:     DC.expand_files_into_sources()
....:     DD = DocTestDispatcher(DC)
....:     DR = DocTestReporter(DC)
....:     DC.reporter = DR
....:     DC.dispatcher = DD
....:     DC.timer = Timer().start()
....:     DD.parallel_dispatch()
sage -t ...
**********************************************************************
File "...", line 2, in ...
Failed example:
    True
Expected:
    False
Got:
    True
**********************************************************************
1 item had failures:
   1 of   1 in ...
    [1 test, 1 failure, ... s]
Killing test ...
- serial_dispatch()
Run the doctests from the controller’s specified sources in series.
There is no graceful handling for signals, no possibility of interrupting tests, and no timeout.
EXAMPLES:
sage: from sage.doctest.control import DocTestController, DocTestDefaults
sage: from sage.doctest.forker import DocTestDispatcher
sage: from sage.doctest.reporting import DocTestReporter
sage: from sage.doctest.util import Timer
sage: from sage.env import SAGE_SRC
sage: import os
sage: homset = os.path.join(SAGE_SRC, 'sage', 'rings', 'homset.py')
sage: ideal = os.path.join(SAGE_SRC, 'sage', 'rings', 'ideal.py')
sage: DC = DocTestController(DocTestDefaults(), [homset, ideal])
sage: DC.expand_files_into_sources()
sage: DD = DocTestDispatcher(DC)
sage: DR = DocTestReporter(DC)
sage: DC.reporter = DR
sage: DC.dispatcher = DD
sage: DC.timer = Timer().start()
sage: DD.serial_dispatch()
sage -t .../rings/homset.py
    [... tests, ... s]
sage -t .../rings/ideal.py
    [... tests, ... s]
- class sage.doctest.forker.DocTestTask(source)
Bases: object
This class encapsulates the tests from a single source.
This class does not insulate from problems in the source (e.g. entering an infinite loop or causing a segfault); those have to be dealt with at a higher level.
INPUT:
- source – a sage.doctest.sources.DocTestSource instance
- verbose – boolean; controls reporting of progress by doctest.DocTestRunner
EXAMPLES:
sage: from sage.doctest.forker import DocTestTask
sage: from sage.doctest.sources import FileDocTestSource
sage: from sage.doctest.control import DocTestDefaults, DocTestController
sage: from sage.env import SAGE_SRC
sage: import os
sage: filename = os.path.join(SAGE_SRC,'sage','doctest','sources.py')
sage: DD = DocTestDefaults()
sage: FDS = FileDocTestSource(filename,DD)
sage: DTT = DocTestTask(FDS)
sage: DC = DocTestController(DD,[filename])
sage: ntests, results = DTT(options=DD)
sage: ntests >= 300 or ntests
True
sage: sorted(results.keys())
['cputime', 'err', 'failures', 'optionals', 'tests', 'walltime', 'walltime_skips']
- extra_globals = {}
Extra objects to place in the global namespace in which tests are run. Normally this should be empty, but there are special cases where it may be useful.
For example, Sage versions 9.1 and earlier added long as an alias for int on Python 3, so that tests using the long built-in (of which there were many) would still pass. This allowed the test suite to run on Python 3 while Python 2 was still the default.
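As an illustration of how such an alias works, here is a hypothetical sketch (run_doctests is a stand-in helper, not a Sage function):

```python
# Hypothetical sketch of the extra_globals mechanism: names in this
# dict are injected into the namespace in which tests run.
extra_globals = {"long": int}  # e.g. alias `long` to `int` on Python 3

def run_doctests(code, globs=None):
    """Execute test code in a namespace seeded with extra_globals."""
    namespace = dict(globs or {})
    namespace.update(extra_globals)  # injected names are visible to the tests
    exec(code, namespace)
    return namespace

ns = run_doctests("x = long('7') * 6")
print(ns["x"])  # 42: the `long` alias makes the old spelling work
```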
- class sage.doctest.forker.DocTestWorker(source, options, funclist=[])
Bases: Process
The DocTestWorker process runs one DocTestTask for a given source. It returns messages about doctest failures (or about all tests, if doctesting verbosely) through a pipe, and returns results through a multiprocessing.Queue instance (both are created in the start() method).
It runs the task in its own process group, so that killing the process group kills this process together with its child processes.
The class has additional methods and attributes for bookkeeping by the master process. Except in run(), nothing from this class should be accessed by the child process.
INPUT:
- source – a DocTestSource instance
- options – an object representing doctest options
- funclist – a list of callables to be called at the start of the child process
EXAMPLES:
sage: from sage.doctest.forker import DocTestWorker, DocTestTask
sage: from sage.doctest.sources import FileDocTestSource
sage: from sage.doctest.reporting import DocTestReporter
sage: from sage.doctest.control import DocTestController, DocTestDefaults
sage: from sage.env import SAGE_SRC
sage: import os
sage: filename = os.path.join(SAGE_SRC,'sage','doctest','util.py')
sage: DD = DocTestDefaults()
sage: FDS = FileDocTestSource(filename,DD)
sage: W = DocTestWorker(FDS, DD)
sage: W.start()
sage: DC = DocTestController(DD, filename)
sage: reporter = DocTestReporter(DC)
sage: W.join()  # Wait for worker to finish
sage: result = W.result_queue.get()
sage: reporter.report(FDS, False, W.exitcode, result, "")
    [... tests, ... s]
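The process-group arrangement described above can be sketched in plain Python (a simplified assumption about the mechanism, POSIX only; not Sage’s actual code):

```python
import os
import signal
import time
from multiprocessing import Process

def child():
    os.setpgid(0, 0)   # become the leader of a new process group
    time.sleep(60)     # stand-in for a long-running doctest

if __name__ == "__main__":
    p = Process(target=child)
    p.start()
    time.sleep(0.5)                    # give the child time to call setpgid
    os.killpg(p.pid, signal.SIGKILL)   # kill the whole group, grandchildren included
    p.join()
    print(p.exitcode)                  # negative value: the child died from a signal
```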
- kill()
Kill this worker. Return True if the signal(s) are sent successfully or False if the worker process no longer exists.
This method is only called if there is something wrong with the worker. Under normal circumstances, the worker is supposed to exit by itself after finishing.
The first time this is called, use SIGQUIT. This will trigger the cysignals SIGQUIT handler and try to print an enhanced traceback.
Subsequent times, use SIGKILL. Also close the message pipe if it was still open.
EXAMPLES:
sage: import time
sage: from sage.doctest.forker import DocTestWorker, DocTestTask
sage: from sage.doctest.sources import FileDocTestSource
sage: from sage.doctest.reporting import DocTestReporter
sage: from sage.doctest.control import DocTestController, DocTestDefaults
sage: from sage.env import SAGE_SRC
sage: import os
sage: filename = os.path.join(SAGE_SRC,'sage','doctest','tests','99seconds.rst')
sage: DD = DocTestDefaults()
sage: FDS = FileDocTestSource(filename,DD)
We set up the worker to start by blocking SIGQUIT, such that killing will fail initially:

sage: from cysignals.pselect import PSelecter
sage: import signal
sage: def block_hup():
....:     # We never __exit__()
....:     PSelecter([signal.SIGQUIT]).__enter__()
sage: W = DocTestWorker(FDS, DD, [block_hup])
sage: W.start()
sage: W.killed
False
sage: W.kill()
True
sage: W.killed
True
sage: time.sleep(float(0.2))  # Worker doesn't die
sage: W.kill()  # Worker dies now
True
sage: time.sleep(1)
sage: W.is_alive()
False
- read_messages()
In the master process, read from the pipe and store the data read in the messages attribute.
Note
This function may need to be called multiple times in order to read all of the messages.
EXAMPLES:
sage: from sage.doctest.forker import DocTestWorker, DocTestTask
sage: from sage.doctest.sources import FileDocTestSource
sage: from sage.doctest.reporting import DocTestReporter
sage: from sage.doctest.control import DocTestController, DocTestDefaults
sage: from sage.env import SAGE_SRC
sage: import os
sage: filename = os.path.join(SAGE_SRC,'sage','doctest','util.py')
sage: DD = DocTestDefaults(verbose=True,nthreads=2)
sage: FDS = FileDocTestSource(filename,DD)
sage: W = DocTestWorker(FDS, DD)
sage: W.start()
sage: while W.rmessages is not None:
....:     W.read_messages()
sage: W.join()
sage: len(W.messages) > 0
True
- run()
Runs the DocTestTask under its own PGID.
- save_result_output()
Annotate self with self.result (the result read through the result_queue) and with self.output (the complete contents of self.outtmpfile). Then close the Queue and self.outtmpfile.
EXAMPLES:
sage: from sage.doctest.forker import DocTestWorker, DocTestTask
sage: from sage.doctest.sources import FileDocTestSource
sage: from sage.doctest.reporting import DocTestReporter
sage: from sage.doctest.control import DocTestController, DocTestDefaults
sage: from sage.env import SAGE_SRC
sage: import os
sage: filename = os.path.join(SAGE_SRC,'sage','doctest','util.py')
sage: DD = DocTestDefaults()
sage: FDS = FileDocTestSource(filename,DD)
sage: W = DocTestWorker(FDS, DD)
sage: W.start()
sage: W.join()
sage: W.save_result_output()
sage: sorted(W.result[1].keys())
['cputime', 'err', 'failures', 'optionals', 'tests', 'walltime', 'walltime_skips']
sage: len(W.output) > 0
True
Note
This method is called from the parent process, not from the subprocess.
- start()
Start the worker and close the writing end of the message pipe.
- class sage.doctest.forker.SageDocTestRunner(*args, **kwds)
Bases: DocTestRunner, object
A customized version of DocTestRunner that tracks dependencies of doctests.
INPUT:
- stdout – an open file to restore for debugging
- checker – None, or an instance of doctest.OutputChecker
- verbose – boolean; determines whether verbose printing is enabled
- optionflags – controls the comparison with the expected output; see testmod for more information
EXAMPLES:
sage: from sage.doctest.parsing import SageOutputChecker
sage: from sage.doctest.forker import SageDocTestRunner
sage: from sage.doctest.control import DocTestDefaults; DD = DocTestDefaults()
sage: import doctest, sys, os
sage: DTR = SageDocTestRunner(SageOutputChecker(), verbose=False, sage_options=DD, optionflags=doctest.NORMALIZE_WHITESPACE|doctest.ELLIPSIS)
sage: DTR
<sage.doctest.forker.SageDocTestRunner object at ...>
- compile_and_execute(example, compiler, globs)
Runs the given example, recording dependencies.
Rather than using a basic dictionary, Sage’s doctest runner uses a sage.doctest.util.RecordingDict, which records every time a value is set or retrieved. Executing the given code with this recording dictionary as the namespace allows Sage to track dependencies between doctest lines. For example, in the following two lines

sage: R.<x> = ZZ[]
sage: f = x^2 + 1

the recording dictionary records that the second line depends on the first, since the first INSERTS x into the global namespace and the second line RETRIEVES x from the global namespace.
INPUT:
- example – a doctest.Example instance
- compiler – a callable that, applied to example, produces a code object
- globs – a dictionary in which to execute the code
OUTPUT:
the output of the compiled code snippet.
EXAMPLES:
sage: from sage.doctest.parsing import SageOutputChecker
sage: from sage.doctest.forker import SageDocTestRunner
sage: from sage.doctest.sources import FileDocTestSource
sage: from sage.doctest.util import RecordingDict
sage: from sage.doctest.control import DocTestDefaults; DD = DocTestDefaults()
sage: from sage.env import SAGE_SRC
sage: import doctest, sys, os, hashlib
sage: DTR = SageDocTestRunner(SageOutputChecker(), verbose=False, sage_options=DD, optionflags=doctest.NORMALIZE_WHITESPACE|doctest.ELLIPSIS)
sage: DTR.running_doctest_digest = hashlib.md5()
sage: filename = os.path.join(SAGE_SRC,'sage','doctest','forker.py')
sage: FDS = FileDocTestSource(filename,DD)
sage: globs = RecordingDict(globals())
sage: 'doctest_var' in globs
False
sage: doctests, extras = FDS.create_doctests(globs)
sage: ex0 = doctests[0].examples[0]
sage: flags = 32768 if sys.version_info.minor < 8 else 524288
sage: compiler = lambda ex: compile(ex.source, '<doctest sage.doctest.forker[0]>', 'single', flags, 1)
sage: DTR.compile_and_execute(ex0, compiler, globs)
1764
sage: globs['doctest_var']
42
sage: globs.set
{'doctest_var'}
sage: globs.got
{'Integer'}
Now we can execute some more doctests to see the dependencies.
sage: ex1 = doctests[0].examples[1]
sage: compiler = lambda ex:compile(ex.source, '<doctest sage.doctest.forker[1]>', 'single', flags, 1)
sage: DTR.compile_and_execute(ex1, compiler, globs)
sage: sorted(list(globs.set))
['R', 'a']
sage: globs.got
{'ZZ'}
sage: ex1.predecessors
[]
sage: ex2 = doctests[0].examples[2]
sage: compiler = lambda ex:compile(ex.source, '<doctest sage.doctest.forker[2]>', 'single', flags, 1)
sage: DTR.compile_and_execute(ex2, compiler, globs)
a + 42
sage: list(globs.set)
[]
sage: sorted(list(globs.got))
['a', 'doctest_var']
sage: set(ex2.predecessors) == set([ex0,ex1])
True
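The recording-dictionary mechanism can be illustrated in plain Python; the following is a hypothetical minimal analogue of sage.doctest.util.RecordingDict, not its actual implementation:

```python
# Minimal analogue of a recording dict: note which names each executed
# snippet sets (INSERTS) and retrieves (RETRIEVES).  exec() routes name
# stores/loads through __setitem__/__getitem__ when the namespace is a
# dict subclass rather than a plain dict.
class MiniRecordingDict(dict):
    def __init__(self, *args, **kwds):
        super().__init__(*args, **kwds)
        self.set = set()   # names written by the executed code
        self.got = set()   # names read by the executed code

    def __getitem__(self, name):
        self.got.add(name)
        return super().__getitem__(name)

    def __setitem__(self, name, value):
        self.set.add(name)
        super().__setitem__(name, value)

globs = MiniRecordingDict()
exec("x = 2", globs)                   # first snippet INSERTS x
globs.set, globs.got = set(), set()    # reset between snippets
exec("y = x + 1", globs)               # second snippet RETRIEVES x
print("x" in globs.got, "y" in globs.set)  # True True
```

Comparing the two snippets’ `set` and `got` records is what lets the runner infer that the second depends on the first.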
- report_failure(out, test, example, got, globs)
Called when a doctest fails.
INPUT:
- out – a function for printing
- test – a doctest.DocTest instance
- example – a doctest.Example instance in test
- got – a string, the result of running example
- globs – a dictionary of globals, used if in debugging mode
OUTPUT:
prints a report to out
EXAMPLES:
sage: from sage.doctest.parsing import SageOutputChecker
sage: from sage.doctest.forker import SageDocTestRunner
sage: from sage.doctest.sources import FileDocTestSource
sage: from sage.doctest.control import DocTestDefaults; DD = DocTestDefaults()
sage: from sage.env import SAGE_SRC
sage: import doctest, sys, os
sage: DTR = SageDocTestRunner(SageOutputChecker(), verbose=True, sage_options=DD, optionflags=doctest.NORMALIZE_WHITESPACE|doctest.ELLIPSIS)
sage: filename = os.path.join(SAGE_SRC,'sage','doctest','forker.py')
sage: FDS = FileDocTestSource(filename,DD)
sage: doctests, extras = FDS.create_doctests(globals())
sage: ex = doctests[0].examples[0]
sage: DTR.no_failure_yet = True
sage: DTR.report_failure(sys.stdout.write, doctests[0], ex, 'BAD ANSWER\n', {})
**********************************************************************
File ".../sage/doctest/forker.py", line 11, in sage.doctest.forker
Failed example:
    doctest_var = 42; doctest_var^2
Expected:
    1764
Got:
    BAD ANSWER
If debugging is turned on this function starts an IPython prompt when a test returns an incorrect answer:
sage: sage0.quit()
sage: _ = sage0.eval("import doctest, sys, os, multiprocessing, subprocess")
sage: _ = sage0.eval("from sage.doctest.parsing import SageOutputChecker")
sage: _ = sage0.eval("import sage.doctest.forker as sdf")
sage: _ = sage0.eval("from sage.doctest.control import DocTestDefaults")
sage: _ = sage0.eval("DD = DocTestDefaults(debug=True)")
sage: _ = sage0.eval("ex1 = doctest.Example('a = 17', '')")
sage: _ = sage0.eval("ex2 = doctest.Example('2*a', '1')")
sage: _ = sage0.eval("DT = doctest.DocTest([ex1,ex2], globals(), 'doubling', None, 0, None)")
sage: _ = sage0.eval("DTR = sdf.SageDocTestRunner(SageOutputChecker(), verbose=False, sage_options=DD, optionflags=doctest.NORMALIZE_WHITESPACE|doctest.ELLIPSIS)")
sage: print(sage0.eval("sdf.init_sage(); DTR.run(DT, clear_globs=False)"))  # indirect doctest
**********************************************************************
Line 1, in doubling
Failed example:
    2*a
Expected:
    1
Got:
    34
**********************************************************************
Previously executed commands:
sage: sage0._expect.expect('sage: ')  # sage0 just mis-identified the output as prompt, synchronize
0
sage: sage0.eval("a")
'...17'
sage: sage0.eval("quit")
'Returning to doctests...TestResults(failed=1, attempted=2)'
- report_overtime(out, test, example, got, check_duration)
Called when the warn_long option flag is set and a doctest runs longer than the specified time.
INPUT:
- out – a function for printing
- test – a doctest.DocTest instance
- example – a doctest.Example instance in test
- got – a string, the result of running example
- check_duration – number (default: 0); time spent checking the test output
OUTPUT:
prints a report to out
EXAMPLES:
sage: from sage.doctest.parsing import SageOutputChecker
sage: from sage.doctest.forker import SageDocTestRunner
sage: from sage.doctest.sources import FileDocTestSource
sage: from sage.doctest.control import DocTestDefaults; DD = DocTestDefaults()
sage: from sage.misc.misc import walltime
sage: from sage.env import SAGE_SRC
sage: import doctest, sys, os
sage: DTR = SageDocTestRunner(SageOutputChecker(), verbose=True, sage_options=DD, optionflags=doctest.NORMALIZE_WHITESPACE|doctest.ELLIPSIS)
sage: filename = os.path.join(SAGE_SRC,'sage','doctest','forker.py')
sage: FDS = FileDocTestSource(filename,DD)
sage: doctests, extras = FDS.create_doctests(globals())
sage: ex = doctests[0].examples[0]
sage: ex.walltime = 1.23
sage: DTR.report_overtime(sys.stdout.write, doctests[0], ex, 'BAD ANSWER\n', check_duration=2.34)
**********************************************************************
File ".../sage/doctest/forker.py", line 11, in sage.doctest.forker
Warning, slow doctest:
    doctest_var = 42; doctest_var^2
Test ran for 1.23 s, check ran for 2.34 s
- report_start(out, test, example)
Called when an example starts.
INPUT:
- out – a function for printing
- test – a doctest.DocTest instance
- example – a doctest.Example instance in test
OUTPUT:
prints a report to out
EXAMPLES:
sage: from sage.doctest.parsing import SageOutputChecker
sage: from sage.doctest.forker import SageDocTestRunner
sage: from sage.doctest.sources import FileDocTestSource
sage: from sage.doctest.control import DocTestDefaults; DD = DocTestDefaults()
sage: from sage.env import SAGE_SRC
sage: import doctest, sys, os
sage: DTR = SageDocTestRunner(SageOutputChecker(), verbose=True, sage_options=DD, optionflags=doctest.NORMALIZE_WHITESPACE|doctest.ELLIPSIS)
sage: filename = os.path.join(SAGE_SRC,'sage','doctest','forker.py')
sage: FDS = FileDocTestSource(filename,DD)
sage: doctests, extras = FDS.create_doctests(globals())
sage: ex = doctests[0].examples[0]
sage: DTR.report_start(sys.stdout.write, doctests[0], ex)
Trying (line 11):    doctest_var = 42; doctest_var^2
Expecting:
    1764
- report_success(out, test, example, got, check_duration)
Called when an example succeeds.
INPUT:
- out – a function for printing
- test – a doctest.DocTest instance
- example – a doctest.Example instance in test
- got – a string, the result of running example
- check_duration – number (default: 0); time spent checking the test output
OUTPUT:
prints a report to out
if in debugging mode, starts an IPython prompt at the point of the failure
EXAMPLES:
sage: from sage.doctest.parsing import SageOutputChecker
sage: from sage.doctest.forker import SageDocTestRunner
sage: from sage.doctest.sources import FileDocTestSource
sage: from sage.doctest.control import DocTestDefaults; DD = DocTestDefaults()
sage: from sage.misc.misc import walltime
sage: from sage.env import SAGE_SRC
sage: import doctest, sys, os
sage: DTR = SageDocTestRunner(SageOutputChecker(), verbose=True, sage_options=DD, optionflags=doctest.NORMALIZE_WHITESPACE|doctest.ELLIPSIS)
sage: filename = os.path.join(SAGE_SRC,'sage','doctest','forker.py')
sage: FDS = FileDocTestSource(filename,DD)
sage: doctests, extras = FDS.create_doctests(globals())
sage: ex = doctests[0].examples[0]
sage: ex.walltime = 0.0
sage: DTR.report_success(sys.stdout.write, doctests[0], ex, '1764')
ok [0.00 s]
- report_unexpected_exception(out, test, example, exc_info)
Called when a doctest raises an exception that’s not matched by the expected output.
If debugging has been turned on, starts an interactive debugger.
INPUT:
- out – a function for printing
- test – a doctest.DocTest instance
- example – a doctest.Example instance in test
- exc_info – the result of sys.exc_info()
OUTPUT:
prints a report to out
if in debugging mode, starts PDB with the given traceback
EXAMPLES:
sage: sage0.quit()
sage: _ = sage0.eval("import doctest, sys, os, multiprocessing, subprocess")
sage: _ = sage0.eval("from sage.doctest.parsing import SageOutputChecker")
sage: _ = sage0.eval("import sage.doctest.forker as sdf")
sage: _ = sage0.eval("from sage.doctest.control import DocTestDefaults")
sage: _ = sage0.eval("DD = DocTestDefaults(debug=True)")
sage: _ = sage0.eval("ex = doctest.Example('E = EllipticCurve([0,0]); E', 'A singular Elliptic Curve')")
sage: _ = sage0.eval("DT = doctest.DocTest([ex], globals(), 'singular_curve', None, 0, None)")
sage: _ = sage0.eval("DTR = sdf.SageDocTestRunner(SageOutputChecker(), verbose=False, sage_options=DD, optionflags=doctest.NORMALIZE_WHITESPACE|doctest.ELLIPSIS)")
sage: old_prompt = sage0._prompt
sage: sage0._prompt = r"\(Pdb\) "
sage: sage0.eval("DTR.run(DT, clear_globs=False)")  # indirect doctest
'... ArithmeticError(self._equation_string() + " defines a singular curve")'
sage: sage0.eval("l")
'...if self.discriminant() == 0:...raise ArithmeticError...'
sage: sage0.eval("u")
'...EllipticCurve_field.__init__(self, K, ainvs)'
sage: sage0.eval("p ainvs")
'(0, 0, 0, 0, 0)'
sage: sage0._prompt = old_prompt
sage: sage0.eval("quit")
'TestResults(failed=1, attempted=1)'
- run(test, compileflags=0, out=None, clear_globs=True)
Runs the examples in a given doctest.
This function replaces doctest.DocTestRunner.run since it needs to handle spoofing. It also leaves the display hook in place.
INPUT:
- test – an instance of doctest.DocTest
- compileflags – int (default: 0); the set of compiler flags used to execute examples (passed to compile())
- out – a function for writing the output (defaults to sys.stdout.write())
- clear_globs – boolean (default: True); whether to clear the namespace after running this doctest
OUTPUT:
- f – integer, the number of examples that failed
- t – the number of examples tried
EXAMPLES:
sage: from sage.doctest.parsing import SageOutputChecker
sage: from sage.doctest.forker import SageDocTestRunner
sage: from sage.doctest.sources import FileDocTestSource
sage: from sage.doctest.control import DocTestDefaults; DD = DocTestDefaults()
sage: from sage.env import SAGE_SRC
sage: import doctest, sys, os
sage: DTR = SageDocTestRunner(SageOutputChecker(), verbose=False, sage_options=DD, optionflags=doctest.NORMALIZE_WHITESPACE|doctest.ELLIPSIS)
sage: filename = os.path.join(SAGE_SRC,'sage','doctest','forker.py')
sage: FDS = FileDocTestSource(filename,DD)
sage: doctests, extras = FDS.create_doctests(globals())
sage: DTR.run(doctests[0], clear_globs=False)
TestResults(failed=0, attempted=4)
- summarize(verbose=None)
Print results of testing to self.msgfile and return the number of failures and tests run.
INPUT:
- verbose – whether to print lots of stuff
OUTPUT:
returns (f, t), a doctest.TestResults instance giving the number of failures and the total number of tests run.
EXAMPLES:
sage: from sage.doctest.parsing import SageOutputChecker
sage: from sage.doctest.forker import SageDocTestRunner
sage: from sage.doctest.control import DocTestDefaults; DD = DocTestDefaults()
sage: import doctest, sys, os
sage: DTR = SageDocTestRunner(SageOutputChecker(), verbose=False, sage_options=DD, optionflags=doctest.NORMALIZE_WHITESPACE|doctest.ELLIPSIS)
sage: DTR._name2ft['sage.doctest.forker'] = (1,120)
sage: results = DTR.summarize()
**********************************************************************
1 item had failures:
   1 of 120 in sage.doctest.forker
sage: results
TestResults(failed=1, attempted=120)
- update_digests(example)
Update global and doctest digests.
Sage’s doctest runner tracks the state of doctests so that their dependencies are known. For example, in the following two lines

sage: R.<x> = ZZ[]
sage: f = x^2 + 1

it records that the second line depends on the first, since the first INSERTS x into the global namespace and the second line RETRIEVES x from the global namespace.
This function updates the hashes that record these dependencies.
INPUT:
- example – a doctest.Example instance
EXAMPLES:
sage: from sage.doctest.parsing import SageOutputChecker
sage: from sage.doctest.forker import SageDocTestRunner
sage: from sage.doctest.sources import FileDocTestSource
sage: from sage.doctest.control import DocTestDefaults; DD = DocTestDefaults()
sage: from sage.env import SAGE_SRC
sage: import doctest, sys, os, hashlib
sage: DTR = SageDocTestRunner(SageOutputChecker(), verbose=False, sage_options=DD, optionflags=doctest.NORMALIZE_WHITESPACE|doctest.ELLIPSIS)
sage: filename = os.path.join(SAGE_SRC,'sage','doctest','forker.py')
sage: FDS = FileDocTestSource(filename,DD)
sage: doctests, extras = FDS.create_doctests(globals())
sage: DTR.running_global_digest.hexdigest()
'd41d8cd98f00b204e9800998ecf8427e'
sage: DTR.running_doctest_digest = hashlib.md5()
sage: ex = doctests[0].examples[0]; ex.predecessors = None
sage: DTR.update_digests(ex)
sage: DTR.running_global_digest.hexdigest()
'3cb44104292c3a3ab4da3112ce5dc35c'
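The chained-digest bookkeeping can be sketched as follows (a simplified illustration; Sage’s update_digests also mixes in the digests of predecessor examples):

```python
import hashlib

# Feed each executed example's source into one running MD5 digest, so
# the digest value summarizes everything that has run so far.
running_global_digest = hashlib.md5()
print(running_global_digest.hexdigest())   # d41d8cd98f00b204e9800998ecf8427e (empty input)

for source in ["R.<x> = ZZ[]\n", "f = x^2 + 1\n"]:
    running_global_digest.update(source.encode())

print(running_global_digest.hexdigest())   # now depends on both executed sources
```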
- update_results(D)
When returning results we pick out the results of interest, since many attributes are not pickleable.
INPUT:
- D – a dictionary to update with cputime and walltime
OUTPUT:
the number of failures (or False if there is no failure attribute)
EXAMPLES:
sage: from sage.doctest.parsing import SageOutputChecker
sage: from sage.doctest.forker import SageDocTestRunner
sage: from sage.doctest.sources import FileDocTestSource, DictAsObject
sage: from sage.doctest.control import DocTestDefaults; DD = DocTestDefaults()
sage: from sage.env import SAGE_SRC
sage: import doctest, sys, os
sage: DTR = SageDocTestRunner(SageOutputChecker(), verbose=False, sage_options=DD, optionflags=doctest.NORMALIZE_WHITESPACE|doctest.ELLIPSIS)
sage: filename = os.path.join(SAGE_SRC,'sage','doctest','forker.py')
sage: FDS = FileDocTestSource(filename,DD)
sage: doctests, extras = FDS.create_doctests(globals())
sage: from sage.doctest.util import Timer
sage: T = Timer().start()
sage: DTR.run(doctests[0])
TestResults(failed=0, attempted=4)
sage: T.stop().annotate(DTR)
sage: D = DictAsObject({'cputime':[],'walltime':[],'err':None})
sage: DTR.update_results(D)
0
sage: sorted(list(D.items()))
[('cputime', [...]), ('err', None), ('failures', 0), ('tests', 4), ('walltime', [...]), ('walltime_skips', 0)]
- class sage.doctest.forker.SageSpoofInOut(outfile=None, infile=None)
Bases: SageObject
We replace the standard doctest._SpoofOut for three reasons:
- we need to divert the output of C programs that don’t print through sys.stdout,
- we want the ability to recover partial output from doctest processes that segfault,
- we also redirect stdin (usually from /dev/null) during doctests.
This class defines streams self.real_stdin, self.real_stdout and self.real_stderr which refer to the original streams.
INPUT:
- outfile – (default: tempfile.TemporaryFile()) a seekable open file object to which stdout and stderr should be redirected
- infile – (default: open(os.devnull)) an open file object from which stdin should be redirected
EXAMPLES:
sage: import subprocess, tempfile
sage: from sage.doctest.forker import SageSpoofInOut
sage: O = tempfile.TemporaryFile()
sage: S = SageSpoofInOut(O)
sage: try:
....:     S.start_spoofing()
....:     print("hello world")
....: finally:
....:     S.stop_spoofing()
....:
sage: S.getvalue()
'hello world\n'
sage: _ = O.seek(0)
sage: S = SageSpoofInOut(outfile=sys.stdout, infile=O)
sage: try:
....:     S.start_spoofing()
....:     _ = subprocess.check_call("cat")
....: finally:
....:     S.stop_spoofing()
....:
hello world
sage: O.close()
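The reason even C-level and subprocess output can be captured is that the redirection happens at the file-descriptor level, not merely on sys.stdout. A minimal sketch of that technique (an assumption about the mechanism, not Sage’s exact code; POSIX only):

```python
import os
import sys
import tempfile

outfile = tempfile.TemporaryFile()
saved_fd = os.dup(1)                  # remember the real stdout
try:
    sys.stdout.flush()
    os.dup2(outfile.fileno(), 1)      # redirect file descriptor 1 itself
    os.system('echo spoofed')         # a subprocess writing to fd 1 is captured too
finally:
    sys.stdout.flush()
    os.dup2(saved_fd, 1)              # restore the real stdout
    os.close(saved_fd)

outfile.seek(0)
print(outfile.read())                 # b'spoofed\n'
```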
- getvalue()
Gets the value that has been printed to outfile since the last time this function was called.
EXAMPLES:
sage: from sage.doctest.forker import SageSpoofInOut
sage: S = SageSpoofInOut()
sage: try:
....:     S.start_spoofing()
....:     print("step 1")
....: finally:
....:     S.stop_spoofing()
....:
sage: S.getvalue()
'step 1\n'
sage: try:
....:     S.start_spoofing()
....:     print("step 2")
....: finally:
....:     S.stop_spoofing()
....:
sage: S.getvalue()
'step 2\n'
- start_spoofing()
Set stdin to read from self.infile and stdout to print to self.outfile.
EXAMPLES:
sage: import os, tempfile
sage: from sage.doctest.forker import SageSpoofInOut
sage: O = tempfile.TemporaryFile()
sage: S = SageSpoofInOut(O)
sage: try:
....:     S.start_spoofing()
....:     print("this is not printed")
....: finally:
....:     S.stop_spoofing()
....:
sage: S.getvalue()
'this is not printed\n'
sage: _ = O.seek(0)
sage: S = SageSpoofInOut(infile=O)
sage: try:
....:     S.start_spoofing()
....:     v = sys.stdin.read()
....: finally:
....:     S.stop_spoofing()
....:
sage: v
'this is not printed\n'
We also catch non-Python output:
sage: try:
....:     S.start_spoofing()
....:     retval = os.system('''echo "Hello there"\nif [ $? -eq 0 ]; then\necho "good"\nfi''')
....: finally:
....:     S.stop_spoofing()
....:
sage: S.getvalue()
'Hello there\ngood\n'
sage: O.close()
- stop_spoofing()
Reset stdin and stdout to their original values.
EXAMPLES:
sage: from sage.doctest.forker import SageSpoofInOut
sage: S = SageSpoofInOut()
sage: try:
....:     S.start_spoofing()
....:     print("this is not printed")
....: finally:
....:     S.stop_spoofing()
....:
sage: print("this is now printed")
this is now printed
- class sage.doctest.forker.TestResults(failed, attempted)
Bases: tuple
- attempted
Alias for field number 1
- failed
Alias for field number 0
- sage.doctest.forker.dummy_handler(sig, frame)
Dummy signal handler for SIGCHLD (just to ensure the signal isn’t ignored).
- sage.doctest.forker.init_sage(controller=None)
Import the Sage library.
This function is called once at the beginning of a doctest run (rather than once for each file). It imports the Sage library, sets DOCTEST_MODE to True, and invalidates any interfaces.
EXAMPLES:
sage: from sage.doctest.forker import init_sage
sage: sage.doctest.DOCTEST_MODE = False
sage: init_sage()
sage: sage.doctest.DOCTEST_MODE
True
Check that pexpect interfaces are invalidated, but still work:
sage: gap.eval("my_test_var := 42;")
'42'
sage: gap.eval("my_test_var;")
'42'
sage: init_sage()
sage: gap('Group((1,2,3)(4,5), (3,4))')
Group( [ (1,2,3)(4,5), (3,4) ] )
sage: gap.eval("my_test_var;")
Traceback (most recent call last):
...
RuntimeError: Gap produced error output...
Check that the SymPy equation pretty printer is limited in doctest mode to the default width (80 characters):
sage: from sympy import sympify                                      # optional - sage.symbolic
sage: from sympy.printing.pretty.pretty import PrettyPrinter         # optional - sage.symbolic
sage: s = sympify('+x^'.join(str(i) for i in range(30)))             # optional - sage.symbolic
sage: print(PrettyPrinter(settings={'wrap_line': True}).doprint(s))  # optional - sage.symbolic
 29    28    27    26    25    24    23    22    21    20    19    18    17
x   + x   + x   + x   + x   + x   + x   + x   + x   + x   + x   + x   + x   +
 16    15    14    13    12    11    10    9    8    7    6    5    4    3
x   + x   + x   + x   + x   + x   + x   + x  + x  + x  + x  + x  + x  + x  + x
 2
+ x
The displayhook sorts dictionary keys to simplify doctesting of dictionary output:
sage: {'a':23, 'b':34, 'au':56, 'bbf':234, 'aaa':234}
{'a': 23, 'aaa': 234, 'au': 56, 'b': 34, 'bbf': 234}
- sage.doctest.forker.showwarning_with_traceback(message, category, filename, lineno, file=None, line=None)
Displays a warning message with a traceback.
INPUT: see warnings.showwarning().
OUTPUT: None
EXAMPLES:
sage: from sage.doctest.forker import showwarning_with_traceback
sage: showwarning_with_traceback("bad stuff", UserWarning, "myfile.py", 0)
doctest:warning
...
  File "<doctest sage.doctest.forker.showwarning_with_traceback[1]>", line 1, in <module>
    showwarning_with_traceback("bad stuff", UserWarning, "myfile.py", Integer(0))
:
UserWarning: bad stuff