8. APIs¶
8.1. TCF run: testcase API and target manipulation during testcases¶
8.1.1. TCF’s backbone test case finder and runner¶
This implements a high-level test case finder and runner that locates and executes test cases.
The execution of each test case consists of the following phases:
- configure
- build
- target acquisition
- deploy
- one or more evaluation sequences (each consisting of setup, start, evaluation per se, teardown)
- clean [not executed by default]
The configuration, build and deployment phases happen on the local host; the evaluation phases can happen on the local host (for static tests) or on remote targets (for dynamic tests).
The backbone is designed so multiple drivers (test case drivers) can
be implemented that find test cases and specify to the backbone how to
build, run and test for success or failure. This is done by
subclassing tcfl.tc.tc_c
and extending or redefining
tc_c.is_testcase()
.
Testcases are also defined by subclassing tcfl.tc.tc_c
and
implementing the different methods the meta runner will call to
evaluate. Testcases can manipulate targets (if they need any) by using
the APIs defined by tcfl.tc.target_c
.
The runner collects the list of available targets and determines which testcases have to be run on which targets, then creates an instance of each testcase for each group of targets where it has to run. All the instances are then run in parallel through a multiprocessing pool.
Testcases report results via the report API; reports are handled by
drivers defined following tcfl.tc.report_driver_c
, which can
be subclassed and extended to report to different destinations
according to need. Default drivers report to console and logfiles.
8.1.1.1. Testcase run identification¶
A message identification mechanism in which all messages are prefixed with a code:
[RUNID:]HASH{CBDEL}[XX][.N]
- RUNID is a constant string to identify a run of a set of test cases (defaults to nothing; it can be autogenerated with -i or a specific string given with -i RUNID).
- HASH is a base32 encoded hash of the testcase name, targets where it is to be run and their BSP model.
- CBDEL is one capital letter representing the phase being run (Configure, Build, Deploy, Evaluation, cLean)
- [XX]: base32 encoded hash of the BSP name, applies only to dynamic test case builds per BSP.
This helps locate anything specific to a testcase by grepping the logfile for a given string; adding more components restricts the output.
This means that the message IDs are stable across runs, except for the RUNID, when one is specified.
We also use the RUNID:TC combination as the ticket when requesting a target lock; note this does not conflict with other users, as tickets are namespaced per user. This allows the server log to be used to cross-reference what was being run, to sort out issues.
The hash length (number of characters used) is controlled by
tcfl.tc.tc_c.hashid_len
.
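A stable ID of this shape can be sketched as follows. The exact hash algorithm and inputs TCF uses are an assumption here; the point is that hashing the testcase name plus the targets and BSP model, then base32-encoding and truncating to hashid_len, yields short IDs that are stable across runs.

```python
import base64
import hashlib

def make_hashid(testcase_name, target_names, bsp_model, hashid_len=10):
    # Hash the testcase name, the targets it runs on and their BSP
    # model; sort the target names so the ID does not depend on order.
    h = hashlib.sha256()
    h.update(testcase_name.encode("utf-8"))
    for name in sorted(target_names):
        h.update(name.encode("utf-8"))
    h.update(bsp_model.encode("utf-8"))
    # base32-encode and truncate: short, printable, stable across runs
    return base64.b32encode(h.digest()).decode("ascii").lower()[:hashid_len]

# The same inputs always yield the same ID
ident = make_hashid("test_hello.py", ["qemu-x86"], "x86")
```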
- tcfl.tc.import_mp_pathos()¶
- tcfl.tc.import_mp_std()¶
-
exception
tcfl.tc.
exception
(description, attachments=None)¶ General base exception for reporting results of any phase of test cases
Parameters: - description (str) – a message to report
- attachments (dict) –
a dictionary of items to report, with a few special fields:
- target: this is a tcfl.tc.target_c which shall be used for reporting
- dlevel: this is an integer that indicates the relative level of verbosity (FIXME: link to detailed explanation)
- alevel: this is an integer that indicates the relative level of verbosity for attachments (FIXME: link to detailed explanation)
- any other fields will be passed verbatim and reported
Have to use a dictionary (vs using kwargs) so the name of the keys can contain spaces, for better reporting.
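The attachment-dictionary contract can be illustrated with a minimal stand-in class; this is not the real tcfl.tc.exception, just a sketch of the API described above, showing why a dict (rather than kwargs) permits key names with spaces.

```python
class tc_exception(Exception):
    # Stand-in mirroring the described tcfl.tc.exception API
    def __init__(self, description, attachments=None):
        super().__init__(description)
        self._attachments = attachments if attachments is not None else {}

    def attachments_get(self):
        return self._attachments

    def attachments_update(self, d):
        # Update an exception's attachments
        self._attachments.update(d)

try:
    # "console output" has a space in it -- valid as a dict key,
    # impossible as a keyword argument name
    raise tc_exception("console timed out",
                       {"console output": "login: ...", "dlevel": 1})
except tc_exception as e:
    attachments = e.attachments_get()
```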
-
attachments_get
()¶
-
attachments_update
(d)¶ Update an exception’s attachments
-
tag
= None¶
-
descr
()¶ Return the conceptual name of this exception in present tense
>>> pass_e().descr()
"pass"
>>> fail_e().descr()
"fail"
...
-
descr_past
()¶ Return the conceptual name of this exception in past tense
>>> pass_e().descr_past()
"passed"
>>> fail_e().descr_past()
"failed"
...
-
exception
tcfl.tc.
blocked_e
(description, attachments=None)¶ The test case could not be completed because something failed and disallowed testing if it would pass or fail
-
tag
= 'BLCK'¶
-
-
exception
tcfl.tc.
error_e
(description, attachments=None)¶ Executing the test case found an error
-
tag
= 'ERRR'¶
-
-
exception
tcfl.tc.
skip_e
(description, attachments=None)¶ A decision was made to skip executing the test case
-
tag
= 'SKIP'¶
-
-
tcfl.tc.
valid_results
= {'BLCK': ('block', 'blocked'), 'ERRR': ('error', 'errored'), 'FAIL': ('fail', 'failed'), 'PASS': ('pass', 'passed'), 'SKIP': ('skip', 'skipped')}¶ List of valid results and translations in present and past tense
- pass: the testcase has passed (raise tcfl.tc.pass_e)
- fail: the testcase found a problem it was looking for, like an assertion failure or inconsistency in the code being exercised (raise tcfl.tc.failed_e)
- errr: the testcase found a problem it was not looking for, like a driver crash (raise tcfl.tc.error_e)
- blck: the testcase has blocked due to an infrastructure issue which prevents telling if it passed, failed or errored (raise tcfl.tc.blocked_e)
- skip: the testcase has detected a condition that deems it not applicable and thus shall be skipped (raise tcfl.tc.skip_e)
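The valid_results table maps each tag to its present and past tense wording; a driver can use it to render completion messages. The `result_message` helper below is hypothetical, illustrating only the lookup.

```python
# valid_results as documented above: tag -> (present tense, past tense)
valid_results = {
    'BLCK': ('block', 'blocked'),
    'ERRR': ('error', 'errored'),
    'FAIL': ('fail', 'failed'),
    'PASS': ('pass', 'passed'),
    'SKIP': ('skip', 'skipped'),
}

def result_message(tag):
    # Render a human-readable completion message from a result tag
    present, past = valid_results[tag]
    return "testcase %s" % past
```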
-
class
tcfl.tc.
target_extension_c
(_target)¶ Implement API extensions to targets
An API extension allows you to extend the API for the
tcfl.tc.target_c
class so that more functionality can be added to the target objects passed to testcase methods (like build*(), eval*(), etc) and used as:
>>> class extension_a(target_extension_c):
>>>     def function(self):
>>>         self.target.report_info("Hello world from extension_a")
>>>
>>>     variable = 34
>>> ...
>>> target_c.extension_register(extension_a)
>>> ...
Now, in an (e.g) evaluation function in a testcase:
>>> @tcfl.tc.target()
>>> @tcfl.tc.target()
>>> class _test(tcfl.tc.tc_c):
>>>
>>>     def eval_something(self, target, target1):
>>>         target1.extension_a.function()
>>>         if target1.extension_a.variable > 3:
>>>             do_something()
>>> ...
Extensions have to be registered with
tcfl.tc.target_c.extension_register()
. Unregister with tcfl.tc.target_c.extension_unregister().
Extensions can be anything, but they are commonly used to provide the code to access APIs that are not distributed as part of the core TCF distribution (for example, an API to access a special sensor).
A package might add support on the server side for an interface to access the target and on the client side to access said interfaces.
The __init__() method will typically first check if the target meets the criteria needed for the extension to work or be used. If not, it can raise
target_extension_c.unneeded
to avoid the extension being created. Then it proceeds to create an instance that will be attached to the target for later use.
-
exception
unneeded
¶ Raise this from __init__() if this extension is not needed for this target.
-
target
= None¶ Target this extension applies to
-
class
tcfl.tc.
report_driver_c
¶ Reporting driver interface
To create a reporting driver, subclass this class, implement
report()
and then create an instance, adding it by calling add().
A testcase reports information by calling the report_*() APIs in
reporter_c
, which multiplexes into each reporting driver registered with add(), calling each driver's report() function, which will direct it to the appropriate place.
Drivers can be created to dump the information in any format and to whichever location, as needed.
For examples, look at:
-
report
(reporter, tag, ts, delta, level, message, alevel, attachments)¶ Low level report from testcases
The reporting API calls this function for the final recording of a reported message. Here basically anything can be done; however, since it is called frequently, it has to be efficient or it will slow down testcase execution considerably. Actions done in this function can be:
- filtering (to only run for certain testcases, log levels, tags or messages)
- dump data to a database
- record to separate files based on whichever logic
- etc
When a testcase is completed, it will issue a message COMPLETION <result>, which marks the end of the testcase.
When all the testcases are run, the global testcase reporter (
tcfl.tc.tc_global
) will issue a COMPLETION <result> message. The global testcase reporter can be identified because it has an attribute skip_reports set to True, and thus can be identified with:
>>> if getattr(_tc, "skip_reports", False) == True:
>>>     do_something_for_the_global_reporter()
Important points:
Do not rely on globals; this call is not lock-protected for concurrency and will be called for every single report that the internals of the test runner and the testcases generate, from multiple threads at the same time. Expect a lot of calls.
Must be ready to accept multiple threads calling from different contexts. It is a good idea to use thread local storage (TLS) to store state if needed. See an example in
tcfl.report_console.driver
.
Parameters: - reporter (reporter_c) – who is reporting this; this can be
a
testcase
or a target
. - tag (str) – type of report (PASS, ERRR, FAIL, BLCK, INFO,
DATA); note they are all same length and described in
valid_results
. - ts (float) – timestamp for when the message got generated (in seconds)
- delta (float) – time lapse from when the testcase started execution until this message was generated.
- level (int) – report level for the message (versus for the attachments); note report levels greater or equal to 1000 are used to pass control messages, so they might not be subject to normal verbosity control (for example, for a log file you might want to always include them).
- message (str) –
single line string describing the message to report.
If the message starts with “COMPLETION “, this is the final message issued to mark the result of executing a single testcase. At this point, you can use fields such as
tc_c.result
andtc_c.result_eval
and it can be used as a synchronization point to, for example, flush a file to disk or upload a complete record to a database.
Python 2: note this has been converted to unicode UTF-8
- alevel (int) – report level for the attachments
- attachments (dict) –
extra information to add to the message being reported; shall be reported as KEY: VALUE; VALUE shall be recursively reported:
- lists/tuples/sets shall be reported indicating the index of the member (such as KEYNAME[3]: VALUE)
- dictionaries shall be recursively reported
- strings and integers shall be reported as themselves
- any other data type can be reported as what its repr function returns when converting it to unicode, or whatever representation the driver can do.
You can use functions such as
commonl.data_dump_recursive()
to convert a dictionary to a unicode representation.
This might contain strings that are not valid UTF-8, so you need to convert them using
commonl.mkutf8()
or similar; commonl.data_dump_recursive() does that for you.
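The recursive KEY: VALUE reporting described above can be sketched as follows. This is an assumption of the general shape of the traversal, not the exact output format of commonl.data_dump_recursive().

```python
def dump_recursive(key, value, out):
    # Recursively flatten attachments into "KEY: VALUE" lines, as the
    # attachment rules above describe
    if isinstance(value, dict):
        # dictionaries are recursively reported
        for k in sorted(value):
            dump_recursive("%s.%s" % (key, k) if key else str(k),
                           value[k], out)
    elif isinstance(value, (list, tuple)):
        # lists/tuples report the index of the member, as KEYNAME[3]: VALUE
        for i, v in enumerate(value):
            dump_recursive("%s[%d]" % (key, i), v, out)
    elif isinstance(value, (str, int)):
        # strings and integers are reported as themselves
        out.append("%s: %s" % (key, value))
    else:
        # anything else falls back to its repr
        out.append("%s: %s" % (key, repr(value)))

lines = []
dump_recursive("", {"cmd": "ls", "retval": 0, "output": ["a", "b"]}, lines)
```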
-
classmethod
add
(obj, origin=None)¶ Add a driver to handle other report mechanisms
A report driver is used by tcf run, the meta test runner, to report information about the execution of testcases.
A driver implements the reporting in whichever way it decides it needs to suit the application, uploading information to a server, writing it to files, printing it to screen, etc.
>>> class my_report_driver(tcfl.tc.report_driver_c):
>>>     ...
>>> tcfl.tc.report_driver_c.add(my_report_driver())
Parameters: - obj (tcfl.tc.report_driver_c) – object subclass of
tcfl.tc.report_driver_c
that implements the reporting.
- origin (str) – (optional) where this is being registered; defaults to the caller of this function.
-
classmethod
remove
(obj)¶ Remove a report driver previously added with
add()
Parameters: obj (tcfl.tc.report_driver_c) – object subclass of tcfl.tc.report_driver_c
that implements the reporting.
-
classmethod
ident_simplify
(ident, runid, hashid)¶ If ident looks like:
RUNID:HASHID[SOMETHING], simplify it by returning SOMETHING
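The described simplification can be sketched as follows; this is an assumption of the behavior from the description above, not the actual implementation.

```python
def ident_simplify(ident, runid, hashid):
    # If ident starts with RUNID:HASHID, strip that prefix and return
    # only the remainder; otherwise return ident unchanged.
    prefix = "%s:%s" % (runid, hashid)
    if ident.startswith(prefix):
        return ident[len(prefix):]
    return ident
```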
-
-
class
tcfl.tc.
reporter_c
(testcase=None)¶ High level reporting API
Embedded as part of a target or testcase, allows them to report in a unified way
This class accesses members that are undefined in here but defined by the class that inherits it (tc_c and target_c):
- self.kws
-
report_pass
(message, attachments=None, level=None, dlevel=0, alevel=2)¶
-
report_fail
(message, attachments=None, level=None, dlevel=0, alevel=2)¶
-
report_error
(message, attachments=None, level=None, dlevel=0, alevel=2)¶
-
report_blck
(message, attachments=None, level=None, dlevel=0, alevel=2)¶
-
report_skip
(message, attachments=None, level=None, dlevel=0, alevel=2)¶
-
report_info
(message, attachments=None, level=None, dlevel=0, alevel=2)¶
-
report_data
(domain, name, value, expand=True, level=2, dlevel=0)¶ Report measurable data
When running a testcase, if data is collected that has to be reported for later analysis, use this function to report it. This will be reported by the report driver in a way that makes it easy to collect later on.
Measured data is identified by a domain and a name, plus then the actual value.
A way to picture how this data can look once aggregated is as a table per domain, in which each invocation is a row and each column holds the values for each name.
Parameters: - domain (str) – to which domain this measurement applies (eg: “Latency Benchmark %(type)s”);
- name (str) – name of the value (eg: “context switch (microseconds)”); it is recommended to always add the unit the measurement represents.
- value – value to report for the given domain and name; any type can be reported.
- expand (bool) –
(optional) by default, the domain and name fields will be %(FIELD)s expanded with the keywords of the testcase or target. If False, it will not be expanded.
This enables you to, for example, specify a domain of “Latency measurements for target %(type)s”, which will automatically create a different domain for each type of target.
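The per-domain table aggregation described above can be pictured with a small sketch; the domain and value names are made up, and `report_data` here is a stand-in for how a report driver might accumulate the reported (domain, name, value) triples.

```python
from collections import defaultdict

# One table per domain; each call appends a row of {name: value} columns
tables = defaultdict(list)

def report_data(tables, domain, row):
    # Stand-in for a driver collecting report_data() calls
    tables[domain].append(row)

report_data(tables, "Latency Benchmark x86",
            {"context switch (microseconds)": 12.3})
report_data(tables, "Latency Benchmark x86",
            {"context switch (microseconds)": 11.9})
```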
-
report_tweet
(what, result, extra_report='', ignore_nothing=False, attachments=None, level=None, dlevel=0, alevel=2, dlevel_failed=0, dlevel_blocked=0, dlevel_passed=0, dlevel_skipped=0, dlevel_error=0)¶
-
class
tcfl.tc.
target_c
(rt, testcase, bsp_model, target_want_name, extensions_only=None)¶ A remote target that can be manipulated
Parameters: - rt (dict) – remote target descriptor (dictionary) as returned
by
tcfl.ttb_client.rest_target_find_all()
and others. - tescase (tc_c) – testcase descriptor to which this target instance will be uniquely assigned.
A target always operates in a given BSP model, as decided by the testcase runner. If a remote target A has two BSP models (1 and 2) and a testcase T shall be run on both, the runner will create two testcase instances, T1 and T2. Each will be assigned an instance of
target_c
, A1 and A2 respectively, representing the same target A, but each set to a different BSP model.
Note these objects expose the basic target API, but then different extensions provide APIs to access other interfaces, depending on whether the target exposes them; this is the current list of implemented interfaces:
console
capture
for stream and snapshot captures of audio, video, network traffic, etc.
debug
fastboot
images
ioc_flash_server_app
power
shell
ssh
tunnel
zephyr
-
want_name
= None¶ Name this target is known to by the testcase (as it was claimed with the
tcfl.tc.target()
decorator)
-
rt
= None¶ Remote tags of this target
-
id
= None¶ (short) id of this target
-
fullid
= None¶ Full id of this target
-
type
= None¶ Type name of this target
-
ticket
= None¶ ticket used to acquire this target
-
testcase
= None¶ Testcase that this target is currently executing
-
keep_active
= None¶ Make sure the testcase indicates to the daemon that this target is to be marked as active during the testcase's execution of expectation loops.
-
bsps_stub
= None¶ Dict of BSPs that have to be stubbed for the board to work correctly in the current BSP model (if the board has two BSPs but BSP1 needs to have an image of something so BSP2 can start). The App builder can manipulate this to remove BSPs that can be ignored. The value is a tuple (app, srcinfo) that indicates the App builder that will build the stub and with which source information (path to the source).
-
tmpdir
= None¶ Temporary directory where to store files – this is the same as the testcase’s – it is needed for the report driver to find where to put stuff.
-
kws
= None¶ Keywords for
%(KEY)[sd]
substitution specific to the testcase or target and its current active BSP model and BSP as set with bsp_set().
FIXME: elaborate on testcase keywords, target keywords
These are obtained from the remote target descriptor (self.rt) as obtained from the remote ttbd server.
These can be used to generate strings based on information, as:
>>> print "Something %(id)s" % target.kws
>>> target.shcmd_local("cp %(id)s.config final.config")
To find which fields are available for a target:
$ tcf list -vv TARGETNAME
The testcase will provide also other fields, in
tcfl.tc.tc_c.kws
, which are rolled into this variable too. See more available keywords here.
Note that testcases might be setting more keywords in the target or the testcase with:
>>> target.kw_set("keywordname", "somevalue")
>>> self.kw_set("keywordname", "somevalue")
As well, any of the target's properties set with
TARGET.property_set
(or its command-line equivalent tcf property-set TARGET PROPERTY VALUE) will show up as keywords.
-
kws_origin
= None¶ Origin of keys defined in self.kws
-
do_acquire
= None¶ Shall we acquire this target? By default the testcases get the targets they request acquired for exclusive use, but in some cases, it might not be needed (default: True)
-
lock
= None¶ Note this mainly applies to *_set() operations; remember other testcases can't use the target.
-
classmethod
extension_register
(ext_cls, name=None)¶ Register an extension to the
tcfl.tc.target_c
class.This is usually called from a config file to register an extension provided by a package.
See
target_extension_c
for detailsParameters: - ext_cls (target_extension_c) – a class that provides an extension
- name (str) – (optional) name of the extension (defaults to the class name)
-
classmethod
extension_unregister
(ext_cls, name=None)¶ Unregister an extension to the
tcfl.tc.target_c
class.This is usually used by unit tests. There usually is no need to unregister extensions.
See
target_extension_c
for detailsParameters: - ext_cls (target_extension_c) – a class that provides an extension
- name (str) – (optional) name of the extension (defaults to the class name)
-
bsps_all
¶ Return a list of all BSPs in the target (note this might be more than the ones available in the currently selected BSP model).
-
bsp_set
(bsp=None)¶ Set the active BSP
If the BSP is omitted, this will select the first BSP in the current BSP model. This means that if there is a preference in multiple BSPs, they have to be listed as such in the target’s configuration.
If there are no BSPs, this will raise an exception
Parameters: bsp (str) – (optional) the name of any BSP supported by the board (not necessarily in the BSP model's list of active BSPs; these are always in bsps_all).
If this argument is False, then the active BSP is reset to none.
-
kws_set
(d, bsp=None)¶ Set a bunch of target’s keywords and values
Parameters:
-
kw_set
(kw, val, bsp=None)¶ Set a target’s keyword and value
Parameters:
-
kw_unset
(kw, bsp=None)¶ Unset a target’s string keyword
Parameters:
-
kws_required_verify
(kws)¶ Verify if a target exports required keywords, raise blocked exception if any is missing.
-
ic_field_get
(ic, field, field_description='')¶ Obtain the value of a field for a target in an interconnect
A target might be a member of one or more interconnects, as described by its tags (interconnects section).
Parameters: - ic (tcfl.tc.target_c) – target describing the interconnect
of which this target is a member (as defined in a
@
tcfl.tc.interconnect()
decorator to the testcase class) - field (str) – name of the field whose value we want.
>>> def eval_somestep(self, ic, target1, target2):
>>>     target1.shell.run("ifconfig eth0 %s/%s"
>>>                       % (target2.addr_get(ic, 'ipv4'),
>>>                          target2.ic_field_get(ic, 'ipv4_addr_len')))
-
addr_get
(ic, tech, instance=None)¶ Obtain the address for a target in an interconnect
A target might be a member of one or more interconnects, as described by its tags (interconnects section).
Parameters: - ic (tcfl.tc.target_c) – target describing the interconnect
of which this target is a member (as defined in a
@
tcfl.tc.interconnect()
decorator to the testcase class) - tech (str) –
name of the technology on which address we are interested.
As part of said membership, one or more key/value pairs can be specified. Assigned addresses are always called TECHNOLOGY_addr, where TECHNOLOGY can be things like ipv4, ipv6, bt, mac, etc…
If tech fits a whole key name, it will be used instead.
- instance (str) –
(optional) when this target has multiple connections to the same interconnect (via multiple physical or virtual network interfaces), you can select which of those instances is wanted.
By default this will return the default instance (eg, the one corresponding to the interconnect
ICNAME
), but if an instance is added, it will return the IP address forICNAME#INSTANCE
as declared in the target’s configuration with functions such asttbl.test_target.add_to_interconnect()
.
When the target, for the current testcase is member of a single interconnect, any TECHNOLOGY_addr for the interconnect key/value will be available in the
kws
member as for example.>>> target.kws['ipv4_addr']
However, when member of multiple interconnects, which members are promoted to top level is undetermined if both interconnects provide address information for the same technology. Use this function to obtain the interconnect-specific information.
>>> def eval_somestep(self, ic, target1, target2):
>>>     target1.shell.run("scp /etc/passwd %s:/etc/passwd"
>>>                       % target2.addr_get(ic, 'ipv4'))
-
app_get
(bsp=None, noraise=True)¶ Return the App builder that is assigned to a particular BSP in the target.
Parameters:
-
shcmd_local
(cmd, origin=None, reporter=None, logfile=None)¶ Run a shell command in the local machine, substituting %(KEYWORD)[sd] with keywords defined by the target and testcase.
-
acquire
()¶ Acquire a target
-
release
()¶ Release a target
-
active
()¶ Mark an owned target as active
For long running tests, indicate to the server that this target is still active.
-
property_get
(property_name, default=None)¶ Read a property from the target
Parameters: property_name (str) – Name of the property to read Returns str: value of the property (if set) or None
-
property_set
(property_name, value=None)¶ Set a property on the target
Parameters:
-
disable
(reason='disabled by the administrator')¶ Disable a target, setting an optional reason
Parameters: reason (str) – (optional) string describing the reason [default: none]. This sets a disabled field in the inventory with the message; by convention this means the target is disabled.
-
enable
()¶ Enable a (maybe disabled) target
This removes the disabled field from the inventory.
-
thing_plug
(thing)¶ Connect a thing described in the target’s
tags
things dictionary to the target.Parameters: thing (str) – thing to connect
-
thing_unplug
(thing)¶ Disconnect a thing described in the target’s
tags
things dictionary from the target.Parameters: thing (str) – thing to disconnect
-
thing_list
()¶ Return a list of connected things
-
console_tx
(data, console=None)¶ Transmits the data over the given console
Parameters: - data – data to be sent; data can be anything that can be transformed into a sequence of bytes
- console (str) – (optional) name of console over which to send the data (otherwise use the default one).
Note this function is equivalent to
target.console.write
, which is the raw version of this function. See
send()
for a version that works with the expect sequence
-
crlf
¶ What will
target_c.send()
use for CR/LF when sending data to the target’s consoles. Defaults to\r\n
, but it can be set to any string, even""
for an empty string.
-
send
(data, console=None, crlf=None)¶ Like
console_tx()
, transmits the string of data over the given console.This function, however, differs in takes only strings and that it will append a CRLF sequence at the end of the given string. As well, it will flush the receive pipe so that next time we
expect()
something, it will be only for anything received after we called this function (so we’ll expect to see even the sending of the command).Parameters: - data (str) – string of data to send
- console (str) – (optional) name of console over which to send the data (otherwise use the default one).
- crlf (str) –
(optional) CRLF technique to use, or what to append to the string as a CRLF:
- None: use whatever is in target_c.crlf
- \r: use carriage return
- \r\n: use carriage return and line feed
- \n: use line feed
- ANYSTRING: append ANYSTRING
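The CRLF resolution rules above can be sketched as a pure function; this is a stand-in for the relevant piece of send(), not the real implementation, assuming target_c.crlf defaults to "\r\n" as documented.

```python
def build_send_data(data, crlf=None, target_crlf="\r\n"):
    # None selects the target's default (target_c.crlf); any other
    # string, including "", is appended verbatim as the CRLF sequence
    if crlf is None:
        crlf = target_crlf
    return data + crlf
```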
-
on_console_rx
(regex_or_str, timeout=None, console=None, result='pass')¶ Set up an action to perform (pass, fail, block or skip) when a string or regular expression is received on a given console in this target.
Note this does not wait for said string; you need to run the testcase’s expecter loop with:
>>> self.tls.expecter.run()
As well, those actions will be performed when running
expect()
orwait()
for blocking versions.This allows you to specify many different things you are waiting for from one or more targets and wait for all of them at the same time and block until all of them are received (or timeout).
Parameters: - regex_or_str – string or regular expression (compiled
with
re.compile()
. - timeout (int) – Seconds to wait for regex_or_str to be
received, raise
tcfl.tc.failed_e
otherwise. If False, no timeout check is done; if None, it is taken from the default timeout set by the testcase. - console (str) – (optional) name of console from which to receive the data
- result –
what to do when that regex_or_str is found on the given console:
- pass, (default) raise
tcfl.tc.pass_e
- block, raise
tcfl.tc.blocked_e
- error, raise
tcfl.tc.error_e
, - failed, raise
tcfl.tc.failed_e
, - blocked, raise
tcfl.tc.blocked_e
Note that when running an expecter loop, if seven different actions are added indicating they are expected to pass, all seven of them must have raised a pass exception (or indicated passage somehow) before the loop will consider it a full pass. See
tcfl.expecter.expecter_c.run()
. - pass, (default) raise
Raises: tcfl.tc.pass_e
,tcfl.tc.blocked_e
,tcfl.tc.failed_e
,tcfl.tc.error_e
,tcfl.tc.skip_e
, any other exception from runtimes.Returns: True if a poller for the console was added to the testcase’s expecter loop, False otherwise.
-
wait
(regex_or_str, timeout=None, console=None)¶ Wait for a particular regex/string to be received on a given console of this target before a given timeout.
See
expect()
for a version that just raises exceptions when the output is not received.Parameters: - timeout (int) – Seconds to wait for regex_or_str to be
received, raise
tcfl.tc.error_e
otherwise. If False, no timeout check is done; if None, it is taken from the default timeout set by the testcase. - console (str) – (optional) name of console from which to receive the data
Returns: True if the output was received before the timeout, False otherwise.
-
expect
(regex_or_str, timeout=None, console=None, name=None, raise_on_timeout=<class 'tcfl.tc.failed_e'>, origin=None)¶ Wait for a particular regex/string to be received on a given console of this target before a given timeout.
Similar to
wait()
, it will raise an exception if @regex_or_str is not received before @timeout on @console.Parameters: - timeout (int) – Seconds to wait for regex_or_str to be
received, raise
tcfl.tc.error_e
otherwise. If False, no timeout check is done; if None, it is taken from the default timeout set by the testcase. - console (str) – (optional) name of console from which to receive the data
- origin (str) –
(optional) when reporting information about this expectation, what origin shall it list, eg:
- None (default) to get the current caller
- commonl.origin_get(2) also to get the current caller
- commonl.origin_get(1) also to get the current function
or something as:
>>> "somefilename:43"
Returns: Nothing, if the output is received.
Raises: tcfl.tc.blocked_e
on error,tcfl.tc.error_e
if not received, any other runtime exception.- timeout (int) – Seconds to wait for regex_or_str to be
received, raise
-
stub_app_add
(bsp, _app, app_src, app_src_options='')¶ Add App builder information for a BSP that has to be stubbed.
When running on a target that has multiple BSPs of which some will not be used by the current BSP model, stubs might have to be added to those BSPs to make sure their CPUs do not go wild. Use this function to specify which App builder is going to be used, the path to the stub source and build options. The App building mechanism will take it from there.
An app builder might determine that a given BSP needs no stub; in said case it can remove it from the dict
bsps_stub()
with:>>> del target.bsps_stub[BSPNAME]
This is like the app information added by _target_app_add(), but it is stored in the target_c instance, not to the testcase class.
This is because the stubbing that has to be done is specific to each target (as the BSPs each target has to stub might be different depending on the target and BSP model).
Note this information is only added if there is nothing existing about said BSP. To override, you need to delete and add:
>>> del target.bsps_stub[BSPNAME] >>> target.stub_app_add(BSPNAME, etc etc)
-
static
create_from_cmdline_args
(args, target_name=None, iface=None, extensions_only=None)¶ Create a
tcfl.tc.target_c
object from command line arguments
Parameters: - args (argparse.Namespace) – arguments from argparse
- target_name (str) – (optional) name of the target, by default is taken from args.target.
- iface (str) – (optional) target must support the given interface, otherwise an exception is raised.
Returns: instance of
tcfl.tc.target_c
representing said target, if it is available.
-
ttbd_iface_call
(interface, call, method='PUT', component=None, stream=False, raw=False, files=None, **kwargs)¶ Execute a general interface call to TTBD, the TCF remoting server
This allows calling any interface on the server that provides this target. It is used to implement higher level calls.
Parameters: - interface (str) – interface name (eg: “power”, “console”, etc); normally any new style interface listed in the target’s interfaces tag.
- call (str) – name of the call implemented by such interface (eg: for power, “on”, “off”); these are described on the interface’s implementation.
- method (str) – (optional, defaults to PUT); HTTP method to use to call; one of PUT, GET, DELETE, POST. The interface dictates the method.
- component (str) – (optional, default None) for interfaces that implement multiple components (a common pattern), specify which component the call applies to.
- files (dict) – (optional) dictionary of keys pointing to file names that have to be streamed to the server. Keys are strings with names of the files, values opened file descriptors (or iterables). FIXME: needs more clarification on how this works.
The rest of the arguments are a dictionary keyed by string whose values will be serialized and passed to the remote call as arguments, and thus are interface specific.
Anything that is an iterable or a dictionary will be serialized as JSON. The rest are kept as body arguments so the daemon can decode them properly.
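The serialization rule above can be sketched as follows; this is purely illustrative (not TCF's actual implementation), showing how iterables and dictionaries would be JSON-encoded while scalars pass through as plain body arguments:

```python
import json

def serialize_call_args(**kwargs):
    # Illustrative sketch of the serialization rule described above:
    # iterables and dictionaries are JSON-encoded, scalars pass through
    body = {}
    for key, value in kwargs.items():
        if isinstance(value, (list, tuple, set, dict)):
            body[key] = json.dumps(
                sorted(value) if isinstance(value, set) else value)
        else:
            body[key] = value
    return body

# hypothetical arguments for a power interface call
body = serialize_call_args(component = "AC1", wait = 2.5,
                           components = [ "AC1", "AC2" ])
```

Here `components` reaches the daemon as a JSON string while `component` and `wait` stay as plain body arguments.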
-
bsp_model_suffix
()¶
-
bsp_suffix
()¶
-
report_mk_prefix
()¶
- rt (dict) – remote target descriptor (dictionary) as returned
by
-
class
tcfl.tc.
target_group_c
(descr)¶ A unique group of targets (each set to a specific BSP model) assigned to a testcase for execution.
A testcase can query a
tcfl.tc.target_c
instance of the remote target to manipulate it, by declaring it as an argument to a testcase method, querying the targets
dictionary or calling target()
:

>>> @tcfl.tc.target(name = "mytarget")
>>> class mytest(tcfl.tc.tc_c):
>>>     ...
>>>
>>>     def eval_1(self, mytarget):
>>>         mytarget.power.cycle()
>>>
>>>     def eval_2(self):
>>>         mytarget = self.target_group.target("mytarget")
>>>         mytarget.power.cycle()
>>>
>>>     def eval_3(self):
>>>         mytarget = self.targets["mytarget"]
>>>         mytarget.power.cycle()
-
name
¶
-
name_set
(tgid)¶
-
len
()¶ Return number of targets in the group
-
target
(target_name)¶ Return the instance of
tcfl.tc.target_c
that represents a remote target that met the specification requested with the tcfl.tc.target()
decorator with name target_name
-
target_add
(target_name, _target)¶
-
targets
¶ Dictionary of
tcfl.tc.target_c
descriptors for remote targets, keyed by the name they were requested with via the tcfl.tc.target()
decorator.
-
-
class
tcfl.tc.
result_c
(passed=0, errors=0, failed=0, blocked=0, skipped=0)¶ -
total
()¶
-
summary
()¶
-
normalized
()¶
-
static
from_retval
(retval)¶
-
report
(tc, message, attachments=None, level=None, dlevel=0, alevel=2)¶
-
static
report_from_exception
(_tc, e, attachments=None, force_result=None)¶ Given an exception, report it using the testcase or target reporting infrastructure, with its traceback and any attachments it came with, and return a valid
result_c
code. By default, this is the mapping:
tc.report_pass
is used for pass_e
tc.report_error
is used for error_e
tc.report_fail
is used for failed_e
tc.report_blck
is used for blocked_e
and any other exception tc.report_skip
is used for skip_e
However, it can be forced by passing force_result, or each testcase can be told to map specific exceptions to other results for reporting using the
tcfl.tc.tc_c.exception_to_result
.Parameters: force_result (bool) – force the exception to be interpreted as tcfl.tc.pass_e
,error_e
,failed_e
,tcfl.tc.blocked_e
, orskip_e
; note there is also translation that can be done fromtcfl.tc.tc_c.exception_to_result
.
-
static
from_exception_cpe
(tc, e, result_e=<class 'tcfl.tc.error_e'>)¶
-
static
from_exception
(fn)¶ Call a phase function to translate exceptions into
tcfl.tc.result_c
return codes.Passes through the return code, unless it is None, in which case we just return result_c(1, 0, 0, 0, 0)
Note this function prints some extra detail in case of fail/block/skip.
-
-
class
tcfl.tc.
tc_logadapter_c
(logger, extra)¶ Logging adapter to prefix the testcase’s current BSP model, BSP and target name.
Initialize the adapter with a logger and a dict-like object which provides contextual information. This constructor signature allows easy stacking of LoggerAdapters, if so desired.
You can effectively pass keyword arguments as shown in the following example:
adapter = LoggerAdapter(someLogger, dict(p1=v1, p2=”v2”))
-
id
= None¶
-
prefix
= None¶
-
process
(msg, kwargs)¶ Process the logging message and keyword arguments passed in to a logging call to insert contextual information. You can either manipulate the message itself, the keyword args or both. Return the message and kwargs modified (or not) to suit your needs.
Normally, you’ll only need to override this one method in a LoggerAdapter subclass for your specific needs.
-
Add tags to a testcase
Parameters:
-
tcfl.tc.
serially
()¶ Force a testcase method to run serially (vs
concurrently()
). Remember that methods which run serially are executed first; by default these are the ones that
- take more than one target as arguments
- are evaluation methods
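The partitioning rule described above can be sketched in plain Python; this is a hypothetical illustration of how a runner might decide which methods are serial, not TCF's actual scheduler (the function and argument names are made up):

```python
def partition_methods(methods, target_counts):
    # Hypothetical sketch of the rule described above: evaluation
    # methods and methods taking more than one target run serially;
    # the rest may run concurrently, after the serial ones.
    serial, concurrent = [], []
    for name in methods:
        if name.startswith("eval_") or target_counts.get(name, 0) > 1:
            serial.append(name)
        else:
            concurrent.append(name)
    return serial, concurrent

serial, concurrent = partition_methods(
    [ "build_app", "eval_boot", "start_target", "start_both" ],
    # made-up map of method name -> number of target arguments
    { "build_app": 1, "eval_boot": 1, "start_target": 1, "start_both": 2 })
```

`eval_boot` and the two-target `start_both` land in the serial bucket; the single-target helpers may run concurrently.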
-
tcfl.tc.
concurrently
()¶ Force a testcase method to run concurrently after all the serial methods (vs decorator
serially()
). Remember that methods which run concurrently are executed after the serial methods; by default these are the ones that:
- are not evaluation methods
- take only one target as argument (if you force two methods that share a target to run in parallel, it is your responsibility to ensure proper synchronization)
-
tcfl.tc.
target_want_add
(_tc, target_want_name, spec, origin, **kwargs)¶ Add a requirement for a target to a testcase instance
Given a testcase instance, add a requirement for it to need a target, filtered with the given specification (spec, which defaults to any), a name and optional arguments in the form of keywords.
This is equivalent to the
tcfl.tc.target()
decorator, which adds the requirement to the class, not to the instance. Please refer to it for the arguments.
-
tcfl.tc.
target
(spec=None, name=None, **kwargs)¶ Add a requirement for a target to a testcase instance
For each target this testcase will need, a filtering specification can be given (spec), a name (or it will default to targetN except for the first one, which is just target) and optional arguments in the form of keywords.
Of those optional arguments, the most important are the app_* arguments. An app_* argument supplies a source path for an application that has to be (maybe) configured, built and deployed to the target. The decorator will add phase methods to the testcase to configure, build and deploy the application. Now, depending on the application drivers installed, the application can be built or not. FIXME make this explanation better.
Parameters: - spec (str) – specification to filter against the tags the remote target exposes.
- name (str) – name for the target (must not exist already). If none, first declared target is called target, the next target1, then target2 and so on.
- kwargs (dict) –
extra keyword arguments are allowed, which might be used in different ways that are still TBD. Main ones recognized:
- app_NAME = dict(BSP1: PATH1, BSP2: PATH2): specify a list
of paths to apps that shall be built and deployed to the
given BSPs by App builder app_NAME; App builders exist for
Zephyr, Arduino Sketch and other setups, so you don’t
manually have to build your apps. You can create your own
too.
When a board is being run in a multiple BSP mode, each BSP has to be assigned to an App builder if using the App builder support; otherwise it is an error condition.
- app_NAME = PATH: the same, but one for when one BSP is
used; it applies to any BSP in a single BSP model
target.
FIXME: add link to app builders
- app_NAME_options = STRING: extra options to pass to the App builder. FIXME: support also BSP_options?
- mode: how to consider this target at the time of generating multiple permutations of targets to run a testcase:
- any: run the testcase on any target that can be found to match the specification
- one-per-type: run the testcase on one target of each type that meets the specification (so if five targets match the specification but they are all of the same type, only one will run it; however, if there are two different types in the set of five, one of each type will run it)
- all: run on every single target that matches the specification
Specially on testcases that require multiple targets, there can be a huge number of permutations on how to run the testcase to ensure maximum coverage of different combinations of targets; some experimentation is needed to decide how to tell TCF to run the testcase and balance how many resources are used.
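The three mode policies above can be sketched as a simple selection function; this is an illustrative stand-in (not TCF's internal permutation engine), where the candidate targets are a hypothetical list of (name, type) tuples already matching the specification:

```python
def select_targets(targets, mode):
    # Illustrative sketch of the 'mode' policies described above;
    # 'targets' is a hypothetical list of (name, type) tuples.
    if mode == "all":
        # every single target that matches the specification
        return [ name for name, _type in targets ]
    if mode == "one-per-type":
        # one target of each type that meets the specification
        seen, selected = set(), []
        for name, type_ in targets:
            if type_ not in seen:
                seen.add(type_)
                selected.append(name)
        return selected
    if mode == "any":
        # any single target that matches
        return [ targets[0][0] ] if targets else []
    raise ValueError("unknown mode: %s" % mode)

# made-up target inventory: two qemu-x86 targets, one qemu-arm
targets = [ ("qu04a", "qemu-x86"), ("qu05a", "qemu-x86"),
            ("qu90a", "qemu-arm") ]
```

With this inventory, one-per-type picks one qemu-x86 and the qemu-arm target; all picks all three; any picks a single one.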
-
tcfl.tc.
interconnect
(spec=None, name=None, **kwargs)¶ Add a requirement for an interconnect to a testcase instance
An interconnect is a target that binds two or more targets together and maybe provides interconnectivity services (networking or any other); we declare its need just like any other target’s; however, we add the name to a special list so it is easier to handle later.
The arguments are the same as to
tcfl.tc.target()
.
-
class
tcfl.tc.
expectation_c
(target, poll_period, timeout=0, raise_on_timeout=<class 'tcfl.tc.error_e'>, raise_on_found=None, origin=None)¶ Expectations are something we expect to find in the data polled from a source.
An object implementing this interface can be given to
tcfl.tc.tc_c.expect()
as something to expect, which can be, for example:- text in a serial console output
- templates in an image capture
- audio in an audio capture
- network data in a network data capture
- …
when what is being expected is found,
tcfl.tc.tc_c.expect()
can return data about it (implementation specific) to the caller, or exceptions can be raised (eg: if we see an error); if what is expected is not found, timeout exceptions can be raised. See
tcfl.tc.tc_c.expect()
for more details and FIXME for implementation examples.Note
the
poll()
anddetect()
methods will be called in a loop until all the expectations have been detected.It is recommended that internal state is only saved in the buffers and buffers_poll storage areas provided (vs storing inside the object).
Parameters: - target (tcfl.tc.target_c) – target on which this expectation is operating.
- poll_period (float) – how often this data needs to be polled in seconds (default 1s).
- timeout (int) – maximum time to wait for this expectation; raises an exception of type raise_on_timeout if exceeded. If zero (default), no timeout is raised, which effectively makes the expectation optional; combined with raise_on_found above, it can be used to associate an expectation with an error condition.
- raise_on_timeout (tcfl.tc.exception) – (optional) a type
(not an instance) to throw when not found before the
timeout; a subclass of
tcfl.tc.exception
. - raise_on_found (tcfl.tc.exception) –
an instance (not a type) to throw when found; this is useful to implement errors such as “if I see this image on the screen, bail out”:
>>> self.expect("wait for boot",
>>>     crash = image_on_screenshot(
>>>         target, 'screen', 'icon-crash.png',
>>>         raise_on_found = tcfl.tc.error_e("Crash found"),
>>>         timeout = 0),
>>>     login_prompt = image_on_screenshot(
>>>         target, 'screen', 'canary-login-prompt.png',
>>>         timeout = 4),
>>> )
Note you also need to give it a zero timeout; otherwise it will complain if it didn’t find it.
The exception’s attachments will be updated with the dictionary of data returned by the expectation’s
detect()
. - origin (str) –
(optional) when reporting information about this expectation, what origin shall it list, eg:
- None (default) to get the current caller
- commonl.origin_get(2) also to get the current caller
- commonl.origin_get(1) also to get the current function
or something as:
>>> "somefilename:43"
-
poll_context
()¶ Return a string that uniquely identifies the polling source for this expectation so multiple expectations that are polling from the same place don’t poll repeatedly.
For example, if we are looking for multiple image templates in a screenshot, it does not make sense to take one screenshot per image. It can take one screenshot and look for the images in the same place.
Thus:
if we are polling from a target with role target.want_name, from its screen capturer called VGA, our context becomes:
>>> return '%s-%s' % (self.target.want_name, "VGA")
so it follows that for a generic expectation from a screenshot capturer stored in self.capturer:
>>> return '%s-%s' % (self.target.want_name, self.capturer)
for a serial console, it would become:
>>> return '%s-%s' % (self.target.want_name, self.console_name)
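The deduplication that poll_context() enables can be sketched as follows; this is an illustrative grouping helper (not TCF's internal code), using a minimal stand-in class in place of real expectation_c instances:

```python
def group_by_poll_context(expectations):
    # Sketch of the deduplication described above: expectations whose
    # poll_context() matches share one poll and one storage area
    groups = {}         # context -> expectations sharing that source
    buffers_poll = {}   # context -> shared storage dict for that source
    for exp in expectations:
        context = exp.poll_context()
        groups.setdefault(context, []).append(exp)
        buffers_poll.setdefault(context, {})
    return groups, buffers_poll

class _exp:
    # minimal, hypothetical stand-in for an expectation_c object
    def __init__(self, context):
        self.context = context
    def poll_context(self):
        return self.context

groups, buffers_poll = group_by_poll_context(
    [ _exp("target-VGA"), _exp("target-VGA"), _exp("target-serial0") ])
# two image expectations on the same screen share one screenshot poll
```

Two expectations polling the VGA capturer end up in one group with one shared buffer, so only one screenshot per poll cycle is taken.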
-
poll
(testcase, run_name, buffers_poll)¶ Poll a given expectation for new data from their data source
The expect engine will call this from
tcfl.tc.tc_c.expect()
periodically to get data where to detect what we are expecting. This data could be serial console output, video output, screenshots, network data, anything.The implementation of this interface will store the data (append, replace, depending on the nature of it) in buffers_poll.
For example, a serial console reader might read from the serial console and append to a file; a screenshot capturer might capture the screenshot and put it in a file and make the file name available in buffers_poll[‘filename’].
Note that when we are working with multiple expectations, if a number of them share the same data source (as determined by
poll_context()
), only one poll will be done per source and they will be expected to share the polled data stored in buffers_poll.Parameters: - testcase (tcfl.tc.tc_c) – testcase for which we are polling.
- run_name (str) – name of this run of
tcfl.tc.tc_c.expect()
–they are always different. - buffers_poll (dict) – dictionary where we can store state
for this poll so it can be shared between calls. Detection
methods that use the same polling source (as given by
poll_context()
) will all be given the same storage space.
-
detect
(testcase, run_name, buffers_poll, buffers)¶ Look for what is being expected in the polled data
After the
tcfl.tc.tc_c.expect()
has polled data (withpoll()
above) and stored it in buffers_poll, this function is called to detect what we are expecting in that data.Note the form of the data is completely specific to this expectation object. It can be data saved into the buffers_poll dictionary or that can be referring to a file in the filesystem. See FIXME examples.
For example, a serial console detector might take the data polled by
poll()
, load it and look for a string in there.Parameters: - testcase (tcfl.tc.tc_c) – testcase for which we are detecting.
- run_name (str) – name of this run of
tcfl.tc.tc_c.expect()
–they are always different. - buffers_poll (dict) – dictionary where the polled data has
is available. Note Detection methods that use the same
poling source (as given by
poll_context()
) will all be given the same storage space. as perpoll()
above. - buffers (dict) – dictionary available exclusively to this expectation object to keep data from run to run.
Returns: information about the detection; if None, this means the detection process didn’t find what is being looked for and the detection process will continue.
If a non-None value is returned, whatever it is, it is considered that what was expected has been found.
tcfl.tc.tc_c.expect()
will save this in a dictionary of results specific to each expectation object that will be returned to the user as the return value oftcfl.tc.tc_c.expect()
.As well, if a raise_on_found exception was given, these fields are added to the attachments.
-
flush
(testcase, run_name, buffers_poll, buffers, results)¶ Generate collateral for this expectation
This is called by
tcfl.tc.tc_c.expect()
when all the expectations are completed, and can be used, for example, to add marks to an image indicating where a template or icon was detected. Note different expectations might be creating collateral from the same source, in which case you need to pile on (eg: adding multiple detection marks to the same image)
Collateral files shall be generated with name
tcfl.tc.tc_c.report_file_prefix
such as:>>> collateral_filename = testcase.report_file_prefix + "something"
will generate the filename report-RUNID:HASHID.something; thus, when multiple testcases are executed in parallel, they will not overwrite each other’s collateral.
Parameters: - testcase (tcfl.tc.tc_c) – testcase for which we are detecting.
- run_name (str) – name of this run of
tcfl.tc.tc_c.expect()
–they are always different. - buffers_poll (dict) – dictionary where the polled data has
is available. Note Detection methods that use the same
poling source (as given by
poll_context()
) will all be given the same storage space. as perpoll()
above. - buffers (dict) – dictionary available exclusively to this
expectation object to keep data from run to run. This was
used by
detect()
to store data needed during the detection process. - results (dict) – dictionary of results generated by
detect()
as a result of the detection process.
-
on_timeout
(run_name, poll_context, ellapsed, timeout)¶ Perform an action when the expectation times out being found
Called by the innards of the expect engine when the expectation times out; by default, it raises a generic exception (as specified during the expectation’s creation); it can be overridden to offer a more specific message, etc.
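The poll/detect contract described above can be illustrated with a minimal, self-contained sketch; this is not the real expectation_c interface (which takes testcase/target objects), and the class, the fake data source and all names here are hypothetical:

```python
class text_expectation:
    # Minimal sketch of the poll/detect pattern described above;
    # the "source" is just a callable returning newly read text.
    def __init__(self, source, text):
        self.source = source
        self.text = text

    def poll_context(self):
        # expectations sharing a source would share polls
        return "serial0"

    def poll(self, run_name, buffers_poll):
        # append newly read data to the shared per-context buffer
        buffers_poll.setdefault("data", "")
        buffers_poll["data"] += self.source()

    def detect(self, run_name, buffers_poll, buffers):
        # None means "keep looking"; non-None means "found"
        offset = buffers_poll.get("data", "").find(self.text)
        if offset == -1:
            return None
        return { "offset": offset }

# fake console that yields data one chunk per poll
chunks = iter([ "boot: ", "login: " ])
exp = text_expectation(lambda: next(chunks, ""), "login:")
buffers_poll, buffers = {}, {}
exp.poll("run1", buffers_poll)   # first chunk: not found yet
assert exp.detect("run1", buffers_poll, buffers) is None
exp.poll("run1", buffers_poll)   # second chunk: the prompt appears
```

After the second poll, detect() returns a dictionary describing the match, which is what the expect engine would hand back to the caller.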
-
class
tcfl.tc.
tc_c
(name, tc_file_path, origin)¶ A testcase, with instructions for configuring, building, deploying, setting up, running, evaluating, tearing down and cleaning up.
Derive this class to create a testcase, implementing the different testcase methods to build, deploy and evaluate if it is considered a pass or a failure:
>>> class sometest(tcfl.tc.tc_c):
>>>
>>>     def eval_device_present(self):
>>>         if not os.path.exists("/dev/expected_device"):
>>>             raise tcfl.tc.error_e("Device not connected")
>>>
>>>     def eval_mode_correct(self):
>>>         s = os.stat("/dev/expected_device")
>>>         if s.st_mode & 0x644 == 0:
>>>             raise tcfl.tc.failed_e("wrong mode")
Note
the class will be ignored as a testcase if its name starts with _base_; this is useful to create common code which will be instantiated in another class without it being confused with a testcase.
Parameters: Note that in most cases, the three arguments will be the same, as the name of the testcase will be the same as the path where the testcase is found, and if there is only one testcase per file, the origin is either line 1 or no line.
When a file specifies multiple testcases, they can be created such as:
- name TCFILEPATH#TCCASENAME
- tc_file_path TCFILEPATH
- origin TCFILEPATH:LINENUMBER (matching the line number where the subcase is specified)
this allows a well defined namespace in which cases from multiple files that are run at the same time don’t conflict in name.
The runner will call the testcase methods to evaluate the test; any failure/blockage causes the evaluation to stop and move on to the next testcase:
configure*() for getting source code, configuring a build, etc.
build*() for building anything that is needed to run the testcase
deploy*() for deploying the build products or artifacts needed to run the testcase to the different targets
For evaluating:
- setup*() to setup the system/fixture for an evaluation run
- start*() to start/power-on the targets or anything needed for the test case evaluation
- eval*() to actually do evaluation actions
- teardown*() for powering off
As well, any test*() methods will be run similarly, but for each, the sequence called will be setup/start/test/teardown (in contrast to eval methods, where they are run in sequence without calling setup/start/teardown in between).
clean*() for cleaning up (ran only if -L is passed on the command line)
- class_teardown methods are mostly used for self-testing and debugging; they are called once every single testcase of the same class has completed executing.
Methods can take no arguments or the names of one or more targets they will operate with/on. These targets are declared using the
tcfl.tc.target()
(for a normal target) andtcfl.tc.interconnect()
(for a target that interconnects/groups the rest of the targets together).The methods that take no targets will be called sequentially in alphabetical order (not in declaration order!). The methods that take different targets will be called in parallel (to maximize multiple cores, unless decorated with
tcfl.tc.serially()
). Evaluation functions are always called sequentially, except if decorated with tcfl.tc.concurrently(). The testcase methods use the APIs exported by this class and module:
to report information at the appropriate log level:
reporter_c.report_pass()
,reporter_c.report_fail()
,reporter_c.report_blck()
andreporter_c.report_info()
raise an exception to indicate result of this method:
- pass, raise
tcfl.tc.pass_e
(or simply return) - failed, raise
tcfl.tc.failed_e
, - error, raise
tcfl.tc.error_e
, - blocked, raise
tcfl.tc.blocked_e
; any other uncaught Python exception is also converted to this. - skipped, raise
tcfl.tc.skip_e
- pass, raise
run commands in the local machine with
shcmd_local()
; the command can be formatted with %(KEYWORD)[sd] that will be substituted with values found inkws
.Interact with the remote targets through instances of
target_c
that represent them:via arguments to the method
via
targets
, a dictionary keyed by the names of the targets requested with thetarget()
andinterconnect()
decorators; for example:

>>> @tcfl.tc.interconnect() # named "ic" by default
>>> @tcfl.tc.target()       # named "target1" by default
>>> @tcfl.tc.target()       # named "target" by default
>>> class mytest(tcfl.tc.tc_c):
>>>     ...
>>>
>>>     def start(self, ic, target, target1):
>>>         ic.power.cycle()
>>>         target.power.cycle()
>>>         target1.power.cycle()
>>>
>>>     def eval_1(self, target):
>>>         target.expect("Hello world")
>>>
>>>     def eval_2(self):
>>>         target1 = self.target_group.target("target1")
>>>         target1.expect("Ready")
>>>
>>>     def eval_3(self):
>>>         ic = self.targets["ic"]
>>>         ic.expect("targets are online")
>>>
>>>     def teardown(self):
>>>         for _n, target in reversed(self.targets.iteritems()):
>>>             target.power.off()
target_c
expose APIs to act on the targets, such as power control, serial console access, image deployment
-
origin
= '/home/inaky/t/master-tcf.git/tcfl/tc.py:7191'¶
-
build_only
= []¶ List of places where we declared this testcase is build only
-
targets
= None¶ Target objects (
tcfl.tc.target_c
) in which this testcase is running, keyed by target want name as given to the decorators tcfl.tc.target()
and tcfl.tc.interconnect(). Note this maps to self._target_groups_c.targets()
for convenience.
-
result_eval
= None¶ Result of the last evaluation run
When an evaluation is run (setup/start/eval/teardown), this variable reflects the evaluation status; it is meant to be used during the teardown phase, so, for example, in case of failure, the teardown phase might decide to gather information about the current target’s state.
-
result
= None¶ Result of the last run of all phases in this testcase
we might need to look at this in other testcases executed immediately after (as added with
post_tc_append()
).
-
ts_start
= None¶ time when this testcase was created (and thus when all references to its inception are made); note in __init_shallow__() we update this for when we assign it to a target group to run.
-
report_file_prefix
= None¶ Report file prefix
When needing to create report file collateral of any kind, prefix it with this so it always shows in the same location for all the collateral related to this testcase:
>>> target.shell.file_copy_from("remotefile", >>> self.report_file_prefix + "remotefile")
will produce LOGDIR/report-RUNID:HASHID.remotefile if --log-dir LOGDIR -i RUNID was provided on the command line.
>>> target.capture.get('screen', >>> self.report_file_prefix + "screenshot.png")
will produce LOGDIR/report-RUNID:HASHID.screenshot.png
-
subcases
= None¶ list of subcases this test is asked to execute by the test case runner (see
subtc
)Subcases follow the format NAME/SUBNAME/SUBSUBNAME/SUBSUBSUBNAME…
-
subtc
= None¶ list of subcases this testcase contains
Note this is different to
subcases
in that this is the final list the testcase has collected after doing discovery in the machine and (possibly) examining the execution logs.It is ordered by addition time, so things sub-execute in addition order.
-
parent
= None¶ parent of this testcase (normally used for subcases)
-
do_acquire
= None¶ do we have to actually acquire any targets?
in general (default), the testcases need to acquire the targets where they are going to be executed, but in some cases, they do not.
-
is_static
()¶ Returns True if the testcase is static (needs no targets to execute), False otherwise.
-
hashid_len
= 6¶ Number of characters in the testcase’s hash
The testcase’s HASHID is a unique identifier for a testcase and the group of test targets where it ran.
This defines the length of such hash; it used to be four (4) characters, but once over 40k testcases are run, conflicts start to pop up, where more than one testcase/target combo maps to the same hash.
32 ^ 4 = 1048576 unique combinations
32 ^ 6 = 1073741824 unique combinations
6 chars offer a keyspace 1024 times larger with base32 than 4 chars. Base64 would increase the amount further, but not by enough to offset the ease of confusing caps and non-caps.
So it has been raised to 6.
FIXME: add a registry to warn of used ids
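The keyspace arithmetic above, plus a rough birthday-bound estimate of collisions for 40k testcase/target combinations, can be checked directly (the helper function is illustrative, not part of TCF):

```python
# Keyspace sizes quoted above, and a rough birthday-bound estimate
# of expected colliding pairs: approximately n*(n-1)/(2*N)
def expected_collisions(n, keyspace):
    return n * (n - 1) / (2.0 * keyspace)

assert 32 ** 4 == 1048576        # 4 base32 chars
assert 32 ** 6 == 1073741824     # 6 base32 chars
assert 32 ** 6 // 32 ** 4 == 1024   # 1024 times larger keyspace

# with ~40k combos, a 4-char hash expects hundreds of collisions,
# while a 6-char hash expects less than one
print("4 chars: %.1f" % expected_collisions(40000, 32 ** 4))
print("6 chars: %.2f" % expected_collisions(40000, 32 ** 6))
```

This is why the length was raised from 4 to 6: at 40k combinations the 4-char keyspace is already heavily collision-prone.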
-
relpath_to_abs
(path)¶ Given a path relative to the test script’s source, make it absolute.
@returns string with the absolutized path if relative, the same if already absolute
-
shcmd_local
(cmd, origin=None, reporter=None, logfile=None)¶ Run a shell command in the local machine, substituting %(KEYWORD)[sd] with keywords defined by the testcase.
Parameters: origin (str) – (optional) when reporting information about this expectation, what origin shall it list, eg:
- None (default) to get the current caller
- commonl.origin_get(2) also to get the current caller
- commonl.origin_get(1) also to get the current function
or something as:
>>> "somefilename:43"
-
classmethod
file_ignore_add_regex
(regex, origin=None)¶ Add a regex to match a file name to ignore when looking for testcase files
Parameters: - regex (str) – Regular expression to match against the file name (not path)
- origin (str) – [optional] string describing where this regular expression comes from (eg: FILE:LINENO).
-
classmethod
dir_ignore_add_regex
(regex, origin=None)¶ Add a regex to match a directory name to ignore when looking for testcase files
Parameters: - regex (str) – Regular expression to match against the directory name (not path)
- origin (str) – [optional] string describing where this regular expression comes from (eg: FILE:LINENO).
-
classmethod
driver_add
(_cls, origin=None, *args)¶ Add a driver to handle test cases (a subclass of tc_c)
A testcase driver is a subclass of
tcfl.tc.tc_c
which overrides the methods used to locate testcases and implements the different testcase configure/build/evaluation functions.>>> import tcfl.tc >>> class my_tc_driver(tcfl.tc.tc_c) >>> tcfl.tc.tc_c.driver_add(my_tc_driver)
Parameters: - _cls (tcfl.tc.tc_c) – testcase driver
- origin (str) – (optional) origin of this call
-
hook_pre
= []¶ (list of callables) a list of functions to call before starting execution of each test case instance (right before any phases are run)
Usable to do final testcase touch-up, adding keywords needed for the site deployment, etc.
Note these will be called as methods, in the order given in the list, so the first argument will always be the testcase instance.
E.g.: in a TCF configuration file .tcf/conf_hook.py you can set:
>>> def _my_hook_fn(tc):
>>>     # Classify testcases based on category:
>>>     # - red
>>>     # - green
>>>     # - blue
>>>     #
>>>     # the tc_name keyword has the path of the testcase, which
>>>     # we are using for the sake of example to categorize;
>>>     # keywords can be dumped by running `tcf run
>>>     # /usr/share/examples/test_dump_kws*py`.
>>>
>>>     name = tc.kws['tc_name']
>>>     categories = set()
>>>     for category in [ 'red', 'green', 'blue' ]:
>>>         # if the test's path has CATEGORY, add it
>>>         if category in name:
>>>             categories.add(category)
>>>     if not categories:
>>>         categories.add('uncategorized')
>>>     tc.kw_set('categories', ",".join(categories))
>>>     tc.log.error("DEBUG categories: %s", ",".join(categories))
>>>
>>> tcfl.tc.tc_c.hook_pre.append(_my_hook_fn)
Warning
- this is a global variable for all testcases of all classes and instances assigned to run in different targets
- these functions will execute on different threads and processes, so do not use shared data or global variables.
- only add to this list from configuration files, never from testcases or testcase driver code.
-
type_map
= {}¶ (dict) a dictionary to translate target type names, from TYPE[:BSP] to another name to use when reporting as it is useful/convenient to your application (eg: if what you are testing prefers other type names); will be only translated if present. E.g.:
>>> tcfl.tc.tc_c.type_map = {
>>>     # translate to Zephyr names
>>>     "arduino-101:x86": "arduino_101",
>>>     "arduino-101:arc": "arduino_101_ss",
>>> }
-
exception_to_result
= {<type 'exceptions.AssertionError'>: <class 'tcfl.tc.blocked_e'>}¶ Map exception types to results
this allows an exception raised during execution to be automatically converted to a result type. Any testcase can define its own version of this to decide how to convert exceptions from the default of them being considered blockage to skip, fail or pass
>>> class _test(tcfl.tc.tc_c):
>>>     def configure_exceptions(self):
>>>         self.exception_to_result[OSError] = tcfl.tc.error_e
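The conversion such a mapping enables can be sketched with a small, self-contained helper; this is an illustrative stand-in (not TCF's internal code), with made-up result constants, showing how an exception would be matched against the mapping by instance checks, defaulting to blockage:

```python
class result:
    # made-up stand-ins for TCF's result kinds
    PASS, ERROR, FAILED, BLOCKED, SKIPPED = range(5)

def map_exception(e, exception_to_result, default = result.BLOCKED):
    # Sketch of the conversion described above: return the result for
    # the first mapping entry the exception is an instance of;
    # anything unmapped is considered blockage (the default)
    for exc_type, result_code in exception_to_result.items():
        if isinstance(e, exc_type):
            return result_code
    return default

mapping = { OSError: result.ERROR, AssertionError: result.BLOCKED }
```

Because the check is isinstance-based, a FileNotFoundError (a subclass of OSError) maps to an error, while an unmapped ValueError falls back to blockage.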
-
eval_repeat
= 1¶ How many times do we repeat the evaluation (for stress/MTBF)
-
eval_count
= 0¶ Which evaluation are we currently running (out of
eval_repeat
)
-
testcase_patchers
= []¶ List of callables that will be executed when a testcase is identified; these can modify as needed the testcase (eg: scanning for tags)
-
runid
= None¶
-
runid_visible
= ''¶
-
tmpdir
= '/tmp/tcf.run-CTpFVg'¶ temporary directory where testcases can drop things; this will be specific to each testcase instance (testcase and target group where it runs).
-
buffers
= None¶ temporary directory in which to store information (serial console output, whatever) that will be captured on each different evaluation; on each invocation of the evaluation, a new buffer dir will be allocated and code that captures things from the target will store the captures there.
-
jobs
= 1¶ Number of testcases running on targets
-
rt_all
= None¶
-
release
= True¶
-
report_mk_prefix
()¶ Update the prefix we use for the logging/reports when some parameter changes.
-
target_group
¶ Group of targets this testcase is being run on
-
tag_set
(tagname, value=None, origin=None)¶ Set a testcase tag.
Parameters: Note that there are a few tags that have special conventions:
component/COMPONENTNAME is a tag with value COMPONENTNAME and it is used to classify the testcases by component. Multiple tags like this might exist if the testcase belongs to multiple components. Note it should be a single word.
TCF will create a tag components with value COMPONENTNAME1 COMPONENTNAME2 … (space separated list of components) which shall match the component/COMPONENTx tags. The tag name contains the name of the testcase after testcase instantiation.
Set multiple testcase tags.
Parameters: Same notes as for
tag_set()
apply
-
kw_set
(key, value, origin=None)¶ Set a testcase’s keyword and value
Parameters:
-
kw_unset
(kw)¶ Unset a string keyword for later substitution in commands
Parameters: kw (str) – keyword name
-
kws_set
(d, origin=None)¶ Set a bunch of testcase’s keywords and values
Parameters: d (dict) – A dictionary of keywords and values
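The %(KEYWORD)[sd] substitution that these keywords feed (as used by shcmd_local()) is standard Python %-formatting against the keyword dictionary; the helper and keyword values below are hypothetical, purely to illustrate the mechanism:

```python
def substitute_kws(cmd, kws):
    # Sketch of the %(KEYWORD)[sd] substitution performed on commands:
    # standard Python %-formatting against the keyword dictionary
    return cmd % kws

kws = {}
# rough equivalent of kw_set()/kws_set() building up the dictionary
kws.update(tc_name = "test_hello.py", runid = "ci-451")
cmd = substitute_kws("mkdir -p logs/%(runid)s/%(tc_name)s", kws)
```

A missing keyword raises KeyError, which is why kw_unset() matters: a command referencing an unset keyword fails loudly rather than silently.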
-
expect_global_append
(exp, skip_duplicate=False)¶ Append an expectation to the testcase global expectation list
Refer to
expect()
for more information
-
expect_global_remove
(exp)¶ Remove an expectation from the testcase global expectation list
Refer to
expect()
for more information
-
expect_tls_append
(exp)¶ Append an expectation to the thread-specific expectation list
Refer to
expect()
for more information
-
expect_tls_remove
(exp)¶ Remove an expectation from the thread-specific expectation list
Refer to
expect()
for more information
-
expect
(*exps_args, **exps_kws)¶ Wait for a list of things we expect to happen
This is a generalization of the pattern “expect this string in a serial console”, where we can wait, in the same loop, for many things (expectations) from multiple sources such as serial consoles, screenshots, network captures, audio captures, etc.
Each expectation is an object that implements the
expectation_c
interface which indicates how to:- poll from a data source
- detect what we are expecting in the polled data
- generate collateral for said detected data
This function will enter into a loop, polling the different expectation data sources according to the poll periods they establish, then detecting data, reporting the results back to the user and raising exceptions if the user so indicates (eg: raise an exception if timing out looking for a shell prompt, or raise an exception if a kernel panic string is found).
For example:
>>> self.expect(
>>>     text_on_console(target, "Kernel Panic",
>>>                     name = "kernel panic watchdog",
>>>                     raise_on_found = tcfl.tc.error_e(
>>>                         "Kernel Panic found!")),
>>>     image_on_screenshot(target, 'screen', 'icon-power.png'),
>>>     config_button = image_on_screenshot(target, 'screen',
>>>                                         'icon-config.png'),
>>>     name = "waiting for desktop to boot",
>>>     timeout = 30)
The first expectation will be called kernel panic watchdog and will raise an exception if the console prints a (oversimplified for the sake of the example) kernel panic message. If not found, nothing happens.
The second will default to be called whatever the
image_on_screenshot
calls it (icon-power.png), while the third will have its name overridden to config_button. These last two will capture a screenshot from the target’s screenshot capturer called screen and the named icons need to be found for the call to be successful. Otherwise, error exceptions due to timeout will be raised.
The list of expectations that will always be scanned is, in this order:
- testcase’s global list of expectations (add with
expect_global_append()
) - testcase’s thread specific list of expectations (add with
expect_tls_append()
) - list of expectations in the arguments
Parameters: - exps_args (expectation_c) –
expectation objects which are expected to be self-named (their implementations will assign names or a default will be given). eg:
>>> self.expect(tcfl.tc.tc_c.text_on_console(args..etc),
>>>             tcfl.tc.tc_c.image_on_screenshot(args..etc))
- exps_kws (expectation_c) –
expectation objects named after the keyword they are assigned to; note the keywords name and timeout are reserved. eg:
>>> self.expect(
>>>     shell_prompt = tcfl.tc.tc_c.text_on_console(args..etc),
>>>     firefox_icon = tcfl.tc.tc_c.image_on_screenshot(args..etc)
>>> )
- timeout (int) –
Maximum time in seconds to wait for all non-optional expectations to be met.
>>> timeout = 4
- name (str) –
a name for this execution, used for reporting and generation of collateral; it defaults to a test-specific monotonically increasing number shared amongst all the threads running in this testcase. eg:
>>> name = "shell prompt received"
- origin (str) –
(optional) when reporting information about this expectation, what origin shall it list, eg:
- None (default) to get the current caller
- commonl.origin_get(2) also to get the current caller
- commonl.origin_get(1) also to get the current function
or something as:
>>> "somefilename:43"
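As an illustration of what such an origin string looks like, a helper in the spirit of commonl.origin_get() can be sketched with the inspect module (this is a hypothetical re-implementation, not the real code):

```python
import inspect

# Hedged sketch: return "FILENAME:LINENUMBER" for the frame that is
# `level` entries up the call stack, as an origin helper such as
# commonl.origin_get() plausibly does.

def origin_get(level = 1):
    frame = inspect.stack()[level]
    return "%s:%d" % (frame.filename, frame.lineno)

def caller():
    # level 1 -> the file:line of this very call site
    return origin_get(1)

print(caller())   # e.g. somefilename:43
```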
-
tag_get
(tagname, value_default, origin_default=None)¶ Return a tuple (value, origin) with the value of the tag and where it was defined.
-
target_event
= <threading._Event object>¶
-
assign_timeout
= 1000¶ Maximum time (seconds) to wait to successfully acquire a set of targets
In heavily contended scenarios or large executions, this becomes too simplistic and not that useful; timing out is better delegated to something like Jenkins killing the run after it has been going for too long.
In that case set it to over that timeout (eg: 15 hours); it’ll keep trying to assign until killed; in a tcf configuration file, add:
>>> tcfl.tc.tc_c.assign_timeout = 15 * 60 * 60
or even in the testcase itself, before it assigns (in build or config methods)
>>> self.assign_timeout = 15 * 60 * 60
-
targets_active
(*skip_targets)¶ Mark each target this testcase uses as being used
This is to be called when operations are being done in the background that the daemon can’t see, and thus it would not otherwise consider the target active (e.g.: you are copying a big file over SSH)
>>> class mytest(tcfl.tc.tc_c):
>>>     ...
>>>     def eval_some(self):
>>>         ...
>>>         self.targets_active()
>>>         ...
If any target is to be skipped, they can be passed as arguments:
>>> @tcfl.tc.interconnect()
>>> @tcfl.tc.target()
>>> @tcfl.tc.target()
>>> class mytest(tcfl.tc.tc_c):
>>>     ...
>>>     def eval_some(self, target):
>>>         ...
>>>         self.targets_active(target)
>>>         ...
-
finalize
(result)¶
-
mkticket
()¶
-
post_tc_append
(tc)¶ Append a testcase that shall be executed immediately after this testcase is done executing in the same target group.
This is a construct that can be used for:
- executing other testcases that have been detected as needed only during runtime
- reporting subtestcases of a main testcase (relying only on the output of the main testcase execution), such as in tcfl.tc_zephyr_sanity.tc_zephyr_subsanity_c.
Parameters: tc (tc_c) – [instance of a] testcase to append; note this testcase will be executed in the same target group as this testcase is being executed. So the testcase has to declare the same targets (with the same names) or a subset of them. Example:
>>> @tcfl.tc.target("target1")
>>> @tcfl.tc.target("target2")
>>> @tcfl.tc.target("target3")
>>> class some_tc(tcfl.tc.tc_c):
>>>     ...
>>>     def eval_something(self, target2):
>>>         new_tc = another_tc(SOMEPARAMS)
>>>         self.post_tc_append(new_tc)
>>>
>>> @tcfl.tc.target("target2")
>>> class another_tc(tcfl.tc.tc_c):
>>>     ...
>>>     def eval_something(self, target2):
>>>         self.report_info("I'm running on target2")
>>>
>>> @tcfl.tc.target("target1")
>>> @tcfl.tc.target("target3")
>>> class yet_another_tc(tcfl.tc.tc_c):
>>>     ...
>>>     def eval_something(self, target1, target3):
>>>         self.report_info("I'm running on target1 and target3")
-
file_regex
= <_sre.SRE_Pattern object>¶
-
classmethod
is_testcase
(path, from_path, tc_name, subcases_cmdline)¶ Determine if a given file describes one or more testcases and create them
TCF’s test case discovery engine calls this method for each file that could describe one or more testcases. It will iterate over all the files and directories passed on the command line, find the files and call this function to enquire about each one.
This function’s responsibility is then to look at the contents of the file and create one or more objects of type
tcfl.tc.tc_c
which represent the testcases to be executed, returning them in a list.
When creating a testcase driver, the driver has to provide its own version of this function. The default implementation recognizes python files called test_*.py that contain one or more classes that subclass
tcfl.tc.tc_c
.See examples of drivers in:
tcfl.tc_clear_bbt.tc_clear_bbt_c.is_testcase()
tcfl.tc_zephyr_sanity.tc_zephyr_sanity_c.is_testcase()
examples.test_ptest_runner()
(impromptu testcase driver)
note drivers need to be registered with
tcfl.tc.tc_c.driver_add()
; on the other hand, a Python impromptu testcase driver needs no registration, but the test class has to be called _driver.
Parameters: - path (str) – path and filename of the file that has to be examined; this is always a regular file (or symlink to it).
- from_path (str) –
source command line argument this file was found on; e.g.: if path is dir1/subdir/file, and the user ran:
$ tcf run somefile dir1/ dir2/
tcf run found this under the second argument and thus:
>>> from_path = "dir1"
- tc_name (str) – testcase name the core has determined based on the path and subcases specified on the command line; the driver can override it, but it is recommended it is kept.
- subcases_cmdline (list(str)) –
list of subcases the user has specified in the command line; e.g.: for:
$ tcf run test_something.py#sub1#sub2
this would be:
>>> subcases_cmdline = [ 'sub1', 'sub2']
Returns: list of testcases found in path, empty if none found or file not recognized / supported.
-
classmethod
find_in_path
(tcs, path, subcases_cmdline)¶ Given a path, scan it for test cases and add them to the dictionary tcs, keyed by the filename where each was found.
Parameters: - tcs (dict) – dictionary where to add the test cases found
- path (str) – path where to scan for test cases
- subcases (list) – list of subcase names the testcase should consider
Returns: result_c with counts of tests passed/failed (zero, as at this stage we cannot know), blocked (due to errors importing) or skipped (due to whichever condition).
-
class_result
= 0 (0 0 0 0 0)¶
-
class
tcfl.tc.
subtc_c
(name, tc_file_path, origin, parent)¶ Helper for reporting sub testcases
This is used to implement a pattern where a testcase reports, when executed, multiple subcases that are always executed. Then the output is parsed and reported as individual testcases.
As well as the parameters in
tcfl.tc.tc_c
, the following parameter is needed:
Parameters: parent (tcfl.tc.tc_c) – testcase which is the parent of this testcase. Refer to this simplified example for a usage example.
Note these subcases are just an artifact to report the subcases results individually, so they do not actually need to acquire or physically use the targets.
-
update
(result, summary, output)¶ Update the results this subcase will report
Parameters: - result (tcfl.tc.result_c) – result to be reported
- summary (str) – one liner summary of the execution report
-
eval_50
()¶
-
static
clean
()¶
-
class_result
= 0 (0 0 0 0 0)¶
-
-
tcfl.tc.
find
(args)¶ Discover test cases in a list of paths
-
tcfl.tc.
tc_global
= <tcfl.tc.tc_c object>¶ Global testcase reporter
Used to report top-level progress and messages beyond the actual testcases we have to execute
-
tcfl.tc.
testcases_discover
(tcs_filtered, args)¶
-
tcfl.tc.
argp_setup
(arg_subparsers)¶
8.1.2. Test library (utilities for testcases)¶
Common utilities for test cases
Evaluate the build environment and make sure all that is needed to build Zephyr apps is in place.
If not, return a dictionary defining a skip tag with the reason that can be fed directly to decorator
tcfl.tc.tags()
; usage:
>>> import tcfl.tc
>>> import qal
>>>
>>> @tcfl.tc.tags(**qal.zephyr_tests_tags())
>>> class some_test(tcfl.tc.tc_c):
>>>     ...
-
tcfl.tl.
console_dump_on_failure
(testcase)¶ If a testcase has errored, failed or blocked, dump the consoles of all the targets.
Parameters: testcase (tcfl.tc.tc_c) – testcase whose targets’ consoles we want to dump Usage: in a testcase’s teardown function:
>>> import tcfl.tc
>>> import tcfl.tl
>>>
>>> class some_test(tcfl.tc.tc_c):
>>>     ...
>>>
>>>     def teardown_SOMETHING(self):
>>>         tcfl.tl.console_dump_on_failure(self)
-
tcfl.tl.
setup_verify_slip_feature
(zephyr_client, zephyr_server, _ZEPHYR_BASE)¶ The Zephyr kernel we use needs to support CONFIG_SLIP_MAC_ADDR, so if any of the targets needs SLIP support, make sure that feature is Kconfigurable. Note we do this after building, because we need the full target’s configuration file.
Parameters: - zephyr_client (tcfl.tc.target_c) – Client Zephyr target
- zephyr_server (tcfl.tc.target_c) – Server Zephyr target
- _ZEPHYR_BASE (str) – Path of Zephyr source code
Usage: in a testcase’s setup methods, before building Zephyr code:
>>> @staticmethod
>>> def setup_SOMETHING(zephyr_client, zephyr_server):
>>>     tcfl.tl.setup_verify_slip_feature(zephyr_client, zephyr_server,
>>>                                       tcfl.tl.ZEPHYR_BASE)
Look for a complete example in
../examples/test_network_linux_zephyr_echo.py
.
-
tcfl.tl.
teardown_targets_power_off
(testcase)¶ Power off all the targets used on a testcase.
Parameters: testcase (tcfl.tc.tc_c) – testcase whose targets we are to power off. Usage: in a testcase’s teardown function:
>>> import tcfl.tc
>>> import tcfl.tl
>>>
>>> class some_test(tcfl.tc.tc_c):
>>>     ...
>>>
>>>     def teardown_SOMETHING(self):
>>>         tcfl.tl.teardown_targets_power_off(self)
Note this is usually not necessary as the daemon will power off the targets when cleaning them up; usually when a testcase fails, you want to keep them on to be able to inspect them.
-
tcfl.tl.
tcpdump_enable
(ic)¶ Ask an interconnect to capture IP traffic with TCPDUMP
Note this is only possible if the server to which the interconnect is attached has access to it; if the interconnect is based on the vlan_pci driver, it will support it.
Note the interconnect must be power cycled after this for the setting to take effect. Normally you do this in the start method of a multi-target testcase:
>>> def start(self, ic, server, client):
>>>     tcfl.tl.tcpdump_enable(ic)
>>>     ic.power.cycle()
>>>     ...
-
tcfl.tl.
tcpdump_collect
(ic, filename=None)¶ Collects from an interconnect target the tcpdump capture
Parameters: - ic (tcfl.tc.target_c) – interconnect target
- filename (str) – (optional) name of the local file where to copy the tcpdump data to; defaults to report-RUNID:HASHID-REP.tcpdump (where REP is the repetition count)
-
tcfl.tl.
linux_os_release_get
(target, prefix='')¶ Return in a dictionary the contents of a file /etc/os-release (if it exists)
-
tcfl.tl.
linux_ssh_root_nopwd
(target, prefix='')¶ Configure an SSH daemon to allow login as root with no passwords
In a script:
>>> tcfl.tl.linux_ssh_root_nopwd(target)
>>> target.shell.run("systemctl restart sshd")
To wait for sshd to be fully ready, the following (admittedly hacky) check works:
>>> target.shell.run(    # wait for sshd to fully restart
>>>     # this assumes BASH
>>>     "while ! exec 3<>/dev/tcp/localhost/22; do"
>>>     "  sleep 1s; done", timeout = 10)
why not nc? easy and simple, but not installed by default in most distros
why not curl? most distros have it installed and, if SSH is replying with the SSH-2.0 string, then likely the daemon is ready; however, recent versions of curl check for HTTP headers, so it can’t really be used for this
why not plain ssh? because that might fail for many other reasons; you can check the ssh -v debug output for a debug1: Remote protocol version string, but that output is harder to keep under control:
$ ssh -v localhost 2>&1 -t echo | fgrep -q 'debug1: Remote protocol version'
is a valid test
why not netstat? netstat is not always available but, when it is, this is also a valid test:
$ while ! netstat -antp | grep -q '^tcp.*:22.*LISTEN.*sshd'; do sleep 1s; done
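The same wait-for-sshd check can also be done from the client side in plain Python, polling until the TCP port accepts connections (a self-contained sketch, not part of tcfl.tl):

```python
import socket
import time

# Sketch: poll a TCP port until it accepts connections or a timeout
# expires; the same idea as the /dev/tcp shell loop above, run from
# the client instead of in the target's shell.

def wait_port_open(host, port, timeout = 10, period = 0.5):
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with socket.create_connection((host, port), timeout = period):
                return True     # something is accepting connections
        except OSError:
            time.sleep(period)  # not up yet; back off and retry
    return False
```

For sshd, this would be called as wait_port_open(TARGET_ADDR, 22) once a tunnel or direct route to the target exists.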
Things you can do after this:
switch over to an SSH console if configured (they are faster and depending on the HW, more reliable):
>>> target.console.setup_preferred()
-
tcfl.tl.
deploy_linux_ssh_root_nopwd
(_ic, target, _kws)¶
-
tcfl.tl.
linux_ipv4_addr_get_from_console
(target, ifname)¶ Get the IPv4 address of a Linux Interface from the Linux shell using the ip addr show command.
Parameters: - target (tcfl.tc.target_c) – target on which to find the IPv4 address.
- ifname (str) – name of the interface for which we want to find the IPv4 address.
Raises: tcfl.tc.error_e – if it cannot find the IP address.
Example:
>>> import tcfl.tl
>>> ...
>>>
>>> @tcfl.tc.interconnect("ipv4_addr")
>>> @tcfl.tc.target("pos_capable")
>>> class my_test(tcfl.tc.tc_c):
>>>     ...
>>>     def eval(self, tc, target):
>>>         ...
>>>         ip4 = tcfl.tl.linux_ipv4_addr_get_from_console(target, "eth0")
>>>         ip4_config = target.addr_get(ic, "ipv4")
>>>         if ip4 != ip4_config:
>>>             raise tcfl.tc.failed_e(
>>>                 "assigned IPv4 addr %s is different than"
>>>                 " expected from configuration %s" % (ip4, ip4_config))
-
tcfl.tl.
sh_export_proxy
(ic, target)¶ If the interconnect ic defines a proxy environment, issue a shell command in target to export environment variables that configure it:
>>> class test(tcfl.tc.tc_c):
>>>
>>>     def eval_some(self, ic, target):
>>>         ...
>>>         tcfl.tl.sh_export_proxy(ic, target)
would yield a command such as:
$ export http_proxy=http://192.168.98.1:8888 https_proxy=http://192.168.98.1:8888 no_proxy=127.0.0.1,192.168.98.1/24,fc00::62:1/112 HTTP_PROXY=$http_proxy HTTPS_PROXY=$https_proxy NO_PROXY=$no_proxy
being executed in the target
-
tcfl.tl.
linux_wait_online
(ic, target, loops=20, wait_s=0.5)¶ Wait on the serial console until the system is assigned an IP
We make the assumption that once the system is assigned the IP that is expected on the configuration, the system has upstream access and thus is online.
-
tcfl.tl.
linux_rsync_cache_lru_cleanup
(target, path, max_kbytes)¶ Cleanup an LRU rsync cache in a path in the target
An LRU rsync cache is a file tree which is used as an accelerator to rsync trees in to the target for the POS deployment system;
When it grows too big, we need to purge the files/dirs that were uploaded longest ago (as this indicates when it was the last time they were used). For that we use the mtime and we sort by it.
Note this is quite naive, since we can’t really calculate well the space occupied by directories, which adds to the total…
So it sorts by reverse mtime (newest first) and iterates over the list until the accumulated size is more than max_kbytes; then it starts removing files.
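The trimming logic described above can be sketched in plain Python over (mtime, size, path) tuples (the real code stats files on the target over the console; names and data here are illustrative):

```python
# Sketch of the LRU trimming described above: sort newest-first by
# mtime, keep accumulating size until the cap is exceeded, then mark
# everything older for removal.

def lru_cleanup(entries, max_kbytes):
    """entries: iterable of (mtime, size_kbytes, path) tuples;
    returns the list of paths that should be removed."""
    entries_newest_first = sorted(entries, reverse = True)
    kept_kbytes = 0
    to_remove = []
    for mtime, size_kbytes, path in entries_newest_first:
        kept_kbytes += size_kbytes
        if kept_kbytes > max_kbytes:
            to_remove.append(path)
    return to_remove

entries = [
    (100, 400, "old.img"),   # oldest upload
    (200, 400, "mid.img"),
    (300, 400, "new.img"),   # newest upload
]
print(lru_cleanup(entries, 1000))   # ['old.img']
```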
-
tcfl.tl.
swupd_bundle_add_timeouts
= {'LyX': 500, 'R-rstudio': 1200, 'big-data-basic': 800, 'c-basic': 500, 'computer-vision-basic': 800, 'container-virt': 800, 'containers-basic-dev': 1200, 'database-basic-dev': 800, 'desktop': 480, 'desktop-autostart': 480, 'desktop-dev': 2500, 'desktop-kde-apps': 800, 'devpkg-clutter-gst': 800, 'devpkg-gnome-online-accounts': 800, 'devpkg-gnome-panel': 800, 'devpkg-nautilus': 800, 'devpkg-opencv': 800, 'education': 800, 'education-primary': 800, 'game-dev': 6000, 'games': 800, 'java-basic': 1600, 'java11-basic': 1600, 'java12-basic': 1600, 'java13-basic': 1600, 'java9-basic': 1600, 'machine-learning-basic': 1200, 'machine-learning-tensorflow': 800, 'machine-learning-web-ui': 1200, 'mail-utils-dev ': 1000, 'maker-cnc': 800, 'maker-gis': 800, 'network-basic-dev': 1200, 'openstack-common': 800, 'os-clr-on-clr': 8000, 'os-clr-on-clr-dev': 8000, 'os-core-dev': 800, 'os-testsuite': 1000, 'os-testsuite-phoronix': 2000, 'os-testsuite-phoronix-desktop': 1000, 'os-testsuite-phoronix-server': 1000, 'os-util-gui': 800, 'os-utils-gui-dev': 6000, 'python-basic-dev': 800, 'qt-basic-dev': 2400, 'service-os-dev': 800, 'storage-cluster': 800, 'storage-util-dev': 800, 'storage-utils-dev': 1000, 'supertuxkart': 800, 'sysadmin-basic-dev': 1000, 'texlive': 1000}¶ Timeouts for adding different, big size bundles
To add to this configuration, specify in a client configuration file or on a test script:
>>> tcfl.tl.swupd_bundle_add_timeouts['BUNDLENAME'] = TIMEOUT
note timeout for adding a bundle defaults to 240 seconds.
-
tcfl.tl.
swupd_bundle_add
(ic, target, bundle_list, debug=None, url=None, wait_online=True, set_proxy=True, fix_time=None, add_timeout=None, become_root=False)¶ Install bundles into a Clear distribution
This is a helper that installs a list of bundles into a Clear distribution, taking care of a lot of the hard work.
While it would be preferable to simply call swupd bundle-add, we have found we had to repeatedly take manual care of many issues, and thus this helper was born. It will take care of:
wait for network connectivity [convenience]
setup proxy variables [convenience]
set swupd URL from where to download [convenience]
fix system’s time for SSL certification (in broken HW)
retry bundle-add calls when they fail due to:
- random network issues
- issues such as:
Error: cannot acquire lock file. Another swupd process is already running (possibly auto-update)
all retryable after a back-off wait.
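The retry-with-back-off behaviour can be sketched like this, with the actual bundle-add command replaced by a caller-supplied function so the example stands alone (names and back-off values are illustrative, not the tcfl.tl implementation):

```python
import time

# Sketch: retry an operation that can fail transiently (eg: the
# swupd lock being held by auto-update), backing off between tries.

def retry_with_backoff(fn, retries = 3, backoff_s = 0.01):
    for attempt in range(1, retries + 1):
        try:
            return fn()
        except RuntimeError:
            if attempt == retries:
                raise               # out of retries, propagate
            time.sleep(backoff_s * attempt)   # back off, then retry

attempts = []

def flaky_bundle_add():
    # fails twice, then succeeds -- simulates a held lock clearing
    attempts.append(1)
    if len(attempts) < 3:
        raise RuntimeError("cannot acquire lock file")
    return "bundle added"

print(retry_with_backoff(flaky_bundle_add))   # bundle added
```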
Parameters: - ic (tcfl.tc.target_c) – interconnect the target uses for network connectivity
- target (tcfl.tc.target_c) – target on which to operate
- bundle_list – name of the bundle to add or list of them; note they will be added each in a separate bundle-add command
- debug (bool) – (optional) run bundle-add with
--debug
; if None, defaults to environment SWUPD_DEBUG being defined to any value. - url (str) – (optional) set the given url for the swupd’s repository with swupd mirror; if None, defaults to environment SWUPD_URL if defined, otherwise leaves the system’s default setting.
- wait_online (bool) – (optional) waits for the system to have
network connectivity (with
tcfl.tl.linux_wait_online()
); defaults to True. - set_proxy (bool) – (optional) sets the proxy environment with
tcfl.tl.sh_export_proxy()
if the interconnect exports proxy information; defaults to True. - fix_time (bool) – (optional) fixes the system’s time if True to the client’s time; if None, defaults to environment SWUPD_FIX_TIME if defined, otherwise False.
- add_timeout (int) – (optional) timeout to set to wait for the
bundle-add to complete; defaults to whatever is configured in
the
tcfl.tl.swupd_bundle_add_timeouts
or the default of 240 seconds. - become_root (bool) –
(optional) if True run the command as super user using su (defaults to False). To be used when the script has the console logged in as non-root.
This uses su vs sudo as some installations will not install sudo for security reasons.
Note this function assumes su is configured to work without asking any passwords. For that, PAM module pam_unix.so has to be configured to include the option nullok in target’s files such as:
- /etc/pam.d/common-auth
- /usr/share/pam.d/su
tcf-image-setup.sh
will do this for you if using it to set up images.
8.1.3. Provisioning/deploying/flashing PC-class devices with a Provisioning OS¶
8.1.3.1. Core Provisioning OS functionality¶
This module provides tools to image devices with a Provisioning OS.
The general operation mode for this is instructing the device to boot the Provisioning OS; at this point, the test script (or the user, via the tcf client command line) can interact with the POS over the serial console.
Then the device can be partitioned, formatted, etc with general Linux command line tools. As well, an rsync server can be provided to serve OS images that can be flashed.
Booting to POS can be accomplished:
- by network boot and root over NFS
- by a special boot device pre-configured to always boot POS
- any other
Server side modules used actively by this system:
- DHCP server
ttbl.dhcp
: provides dynamic IP address assignment; it can be configured so a pre-configured IP address is always assigned to a target and will also provide PXE/TFTP boot services to boot into POS mode (working in conjunction with HTTP, TFTP and NFS servers).
- rsync server
ttbl.rsync
: provides access to images to rsync into partitions (which is way faster than some other imaging methods when done over a 1Gbps link). - port redirector
ttbl.socat
: not strictly needed for POS, but useful to redirect ports out of the NUT to the greater Internet. This comes in handy if, as part of the testing, external software has to be installed or external services accessed.
Note installation in the server side is needed, as described in POS setup.
-
tcfl.pos.
image_spec_to_tuple
(i)¶
-
tcfl.pos.
image_list_from_rsync_output
(output)¶
-
tcfl.pos.
image_select_best
(image, available_images, target)¶
-
tcfl.pos.
target_power_cycle_to_pos_pxe
(target)¶
-
tcfl.pos.
target_power_cycle_to_normal_pxe
(target)¶
-
tcfl.pos.
persistent_tcf_d
= '/persistent.tcf.d'¶ Name of the directory created in the target’s root filesystem to cache test content
This is maintained by the provisioning process, although it might be cleaned up to make room.
-
tcfl.pos.
mk_persistent_tcf_d
(target, subdirs=None)¶
-
tcfl.pos.
deploy_linux_kernel
(ic, target, _kws)¶ Deploy a linux kernel tree in the local machine to the target’s root filesystem (example).
A Linux kernel can be built and installed in a separate root directory in the following form:
- ROOTDIR/boot/*
- ROOTDIR/lib/modules/*
all those will be rsync’ed to the target’s /boot and /lib/modules (caching on the target’s persistent rootfs area for performance) after flashing the OS image. Thus, it will overwrite whatever kernels were in there.
The target’s /boot/EFI directories will be kept, so that the bootloader configuration can pull the information to configure the new kernel using the existing options.
Build the Linux kernel from a linux source directory to a build directory:
$ mkdir -p build
$ cp CONFIGFILE build/.config
$ make -C PATH/TO/SRC/linux O=build oldconfig
$ make -C build all
(or your favourite configuration and build mechanism), now it can be installed into the root directory:
$ mkdir -p root
$ make -C build INSTALLKERNEL=ignoreme INSTALL_PATH=root/boot INSTALL_MOD_PATH=root install modules_install
The root directory can now be given to
target.pos.deploy_image
as:
>>> target.deploy_linux_kernel_tree = ROOTDIR
>>> target.pos.deploy_image(ic, IMAGENAME,
>>>     extra_deploy_fns = [ tcfl.pos.deploy_linux_kernel ])
or if using the
tcfl.pos.tc_pos_base
test class template, it can be done such as:
>>> class _test(tcfl.pos.tc_pos_base):
>>>     ...
>>>
>>>     def deploy_00(self, ic, target):
>>>         rootdir = ROOTDIR
>>>         target.deploy_linux_kernel_tree = rootdir
>>>         self.deploy_image_args = dict(extra_deploy_fns = [
>>>             tcfl.pos.deploy_linux_kernel ])
ROOTDIR can be hardcoded, but remember if given relative, it is relative to the directory where tcf run was executed from, not where the testcase source is.
Low level details
When the target’s image has been flashed in place,
tcfl.pos.deploy_image
is asked to call this function.
The client will rsync the tree from the local machine to the persistent space using
target.pos.rsync
, which also caches it in a persistent area to speed up multiple transfers. From there it will be rsynced to its final location.
-
tcfl.pos.
capability_fns
= {'boot_config': {'uefi': <function boot_config_multiroot at 0x7f94dac48250>}, 'boot_config_fix': {'uefi': <function boot_config_fix at 0x7f94dac482d0>}, 'boot_to_normal': {'pxe': <function target_power_cycle_to_normal_pxe at 0x7f94dad247d0>}, 'boot_to_pos': {'pxe': <function target_power_cycle_to_pos_pxe at 0x7f94dad24750>}, 'mount_fs': {'multiroot': <function mount_fs at 0x7f94dac40050>}}¶ Functions to boot a target into POS
Different target drivers can be loaded and will add members to these dictionaries to extend the abilities of the core system to put targets in Provisioning OS mode.
This then allows a single test script to work with multiple target types without having to worry about details.
-
tcfl.pos.
capability_register
(capability, value, fns)¶
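A plain-dict sketch of how capability_register() plausibly extends the capability_fns dictionaries shown above (stand-in code, not the tcfl.pos implementation; the grub value and function are invented for illustration):

```python
# Sketch: a registry mapping capability name -> value -> function,
# mirroring the capability_fns structure above. A POS driver module
# would register its implementations at import time.

capability_fns = {}

def capability_register(capability, value, fns):
    capability_fns.setdefault(capability, {})[value] = fns

def target_power_cycle_to_pos_grub(target):
    pass   # driver-specific boot-to-POS logic would go here

# a hypothetical GRUB-based POS driver registering itself
capability_register("boot_to_pos", "grub",
                    target_power_cycle_to_pos_grub)

print(sorted(capability_fns["boot_to_pos"]))   # ['grub']
```

A test script then only needs to look up the target's pos_capable tag value in this registry, without caring which driver provided the function.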
-
class
tcfl.pos.
extension
(target)¶ Extension to
tcfl.tc.target_c
to handle Provisioning OS capabilities.-
cap_fn_get
(capability, default=None)¶ Return a target’s POS capability.
Parameters: - capability (str) – name of the capability, as defined in the target’s tag *pos_capable*.
- default (str) – (optional) default to use if not specified; DO NOT USE! WILL BE DEPRECATED!
-
boot_to_pos
(pos_prompt=None, timeout=60, boot_to_pos_fn=None)¶
-
boot_normal
(boot_to_normal_fn=None)¶ Power cycle the target (if needed) and boot to normal OS (vs booting to the Provisioning OS).
-
mount_fs
(image, boot_dev)¶ Mount the target’s filesystems in /mnt
When completed, this function has (maybe) formatted/reformatted and mounted all of the target’s filesystems starting in /mnt.
For example, if the final system would have filesystems /boot, / and /home, this function would mount them on:
- / on /mnt/
- /boot on /mnt/boot
- /home on /mnt/home
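The mapping described above can be sketched with a small hypothetical helper (not part of tcfl.pos):

```python
# Sketch: map a final-system filesystem path to where mount_fs()
# would mount it under /mnt. mount_point is an illustrative helper.

def mount_point(fs_path, root = "/mnt"):
    # "/" maps to the root itself; "/boot" to root + "/boot", etc.
    return (root + "/" + fs_path.strip("/")).rstrip("/")

for fs in ["/", "/boot", "/home"]:
    print(fs, "->", mount_point(fs))
```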
This allows
deploy_image()
to rsync content into the final system.
Parameters:
-
rsyncd_start
(ic)¶ Start an rsync server on a target running Provisioning OS
This can be used to receive deployment files from any location needed to execute later in the target. The server is attached to the
/mnt
directory and the target is supposed to mount the destination filesystems there.
This is usually called automatically for the user by the likes of
deploy_image()
and others.It will create a tunnel from the server to the target’s port where the rsync daemon is listening. A client can then connect to the server’s port to stream data over the rsync protocol. The server address and port will be stored in the target’s keywords rsync_port and rsync_server and thus can be accessed with:
>>> print target.kws['rsync_server'], target.kws['rsync_port']
Parameters: ic (tcfl.tc.target_c) – interconnect (network) to which the target is connected.
-
rsync
(src=None, dst=None, persistent_name=None, persistent_dir='/persistent.tcf.d', path_append='/.', rsync_extra='', skip_local=False)¶ rsync data from the local machine to a target
The local machine is the machine executing the test script (where tcf run was called).
This function will first rsync data to a location in the target (persistent storage
/persistent.tcf.d
) that will not be overwritten when flashing images. Then it will rsync it from there to the final location.
target.pos.deploy_image
will cap it to a top size by removing the oldest files.This allows the content to be cached in between testcase execution that reimages the target. Thus, the first run, the whole source tree is transferred to the persistent area, but subsequent runs will already find it there even when if the OS image has been reflashed (as the reflashing will not touch the persistent area). Of course this assumes the previous executions didn’t wipe the persistent area or the whole disk was not corrupted.
This function can be used, for example, when wanting to deploy extra data to the target when using
deploy_image()
:
>>> @tcfl.tc.interconnect("ipv4_addr")
>>> @tcfl.tc.target("pos_capable")
>>> class _test(tcfl.tc.tc_c):
>>>     ...
>>>
>>>     @staticmethod
>>>     def _deploy_mygittree(_ic, target, _kws):
>>>         tcfl.pos.rsync(os.path.expanduser("~/somegittree.git"),
>>>                        dst = '/opt/somegittree.git')
>>>
>>>     def deploy(self, ic, target):
>>>         ic.power.on()
>>>         target.pos.deploy_image(
>>>             ic, "fedora::29",
>>>             extra_deploy_fns = [ self._deploy_mygittree ])
>>>
>>>     ...
In this example, the user has a cloned git tree in
~/somegittree.git
that has to be flashed to the target into /opt/somegittree.git
after ensuring the root file system is flashed with Fedora 29. deploy_image() will start the rsync server and then call _deploy_mygittree(), which will use target.pos.rsync to rsync from the user’s machine to the target’s persistent location (in /mnt/persistent.tcf.d/somegittree.git) and from there to the final location of /mnt/opt/somegittree.git. When the system boots it will be, of course, in /opt/somegittree.git.
Because
target.pos.rsyncd_start
has been called already, these keywords are now available to tell us where to connect:
>>> target.kws['rsync_server']
>>> target.kws['rsync_port']
as setup by calling
target.pos.rsyncd_start
on the target. Functions such as target.pos.deploy_image do this for you.
Parameters: - src (str) – (optional) source tree/file in the local machine to be copied to the target’s persistent area. If not specified, nothing is copied to the persistent area.
- dst (str) – (optional) destination tree/file in the target machine; if specified, the file is copied from the persistent area to the final destination. If not specified, nothing is copied from the persistent area to the final destination.
- persistent_name (str) – (optional) name for the file/tree in the persistent area; defaults to the basename of the source file specification.
- persistent_dir (str) – (optional) name for the persistent
area in the target, defaults to
persistent_tcf_d
.
-
rsync_np
(src, dst, option_delete=False, path_append='/.', rsync_extra='')¶ rsync data from the local machine to a target
The local machine is the machine executing the test script (where tcf run was called).
Unlike
rsync()
, this function will rsync data straight from the local machine to the target’s final destination, but without using the persistent storage /persistent.tcf.d
This function can be used, for example, to flash a whole distribution to the target; however, because that would be very slow,
deploy_image()
is used to transfer a distro as a seed from the server (faster) and then, from the local machine, just whatever changed (eg: some changes being tested in some package):
>>> @tcfl.tc.interconnect("ipv4_addr")
>>> @tcfl.tc.target("pos_capable")
>>> class _test(tcfl.tc.tc_c):
>>>     ...
>>>
>>>     def deploy_tree(_ic, target, _kws):
>>>         target.pos.rsync_np("/SOME/DIR/my-fedora-29", "/")
>>>
>>>     def deploy(self, ic, target):
>>>         ic.power.on()
>>>         target.pos.deploy_image(
>>>             ic, "fedora::29",
>>>             extra_deploy_fns = [ self.deploy_tree ])
>>>
>>>     ...
In this example, the target will be flashed to whatever fedora 29 is available in the server and then
/SOME/DIR/my-fedora-29
will be rsynced on top.Parameters: - src (str) – (optional) source tree/file in the local machine to be copied to the target. If not specified, nothing is copied.
- dst (str) – (optional) destination tree/file in the target machine the source is copied to. If not specified, nothing is copied.
- option_delete (bool) – (optional) Add the
--delete
option to delete anything in the target that is not present in the source (defaults to False).
-
rsyncd_stop
()¶ Stop an rsync server on a target running Provisioning OS
A server was started with
target.pos.rsyncd_start
; kill it gracefully.
-
fsinfo_get_block
(name)¶
-
fsinfo_get_child
(child_name)¶
-
fsinfo_get_child_by_partlabel
(blkdev, partlabel)¶
-
fsinfo_read
(boot_partlabel=None, raise_on_not_found=True, timeout=None)¶ Re-read the target’s partition tables, load the information
Internal API for POS drivers
This will load the partition table, ensuring the information is loaded and that there is at least an entry in the partition table for the boot partition of the device the target describes as the POS boot device (target's pos_boot_dev).
Parameters: - boot_partlabel (str) – (optional) label of the partition we need to be able to find while scanning; will retry a few times up to a minute forcing a scan; defaults to nothing (won’t look for it).
- raise_on_not_found (bool) – (optional); raise a blocked exception if the partition label is not found after retrying; default True.
- timeout (int) – (optional) seconds to wait for the partition tables to be re-read; defaults to 30s (some HW needs more than others and there is no way to make a good determination) or whatever is specified in target tag/property pos_partscan_timeout.
-
rootfs_make_room_candidates
= ['/mnt/tmp/', '/mnt/var/tmp/', '/mnt/var/log/', '/mnt/var/cache/', '/mnt/var/lib/systemd', '/mnt/var/lib/spool']¶ List of directories to clean up when trying to make up space in the root filesystem.
Before an image can be flashed, we need some space so rsync can do its job. If there is not enough, we start cleaning directories of files that can be easily ignored or that we know are going to be wiped.
This list can be manipulated to fit the specific use case, for example, from the deploy methods before calling deploy_image():
>>> self.pos.rootfs_make_room_candidates.insert(0, "/mnt/opt")
To this list, the cache locations from cache_locations_per_distro will be added.
-
cache_locations_per_distro
= {'clear': ['/var/lib/swupd'], 'fedora': ['/var/lib/rpm'], 'rhel': ['/var/lib/rpm']}¶ Dictionary of locations we cache for each distribution
keyed by the beginning of the distro name, this allows us to respect the space where content has been previously downloaded, so future executions don't have to download it again. This can heavily cut test setup time.
-
deploy_image
(ic, image, boot_dev=None, root_part_dev=None, partitioning_fn=None, extra_deploy_fns=None, pos_prompt=None, timeout=60, timeout_sync=240, target_power_cycle_to_pos=None, boot_config=None)¶ Deploy an image to a target using the Provisioning OS
Parameters: - ic (tcfl.tc.tc_c) – interconnect off which we are booting the
Provisioning OS and to which
target
is connected. - image (str) –
name of an image available in an rsync server specified in the interconnect’s
pos_rsync_server
tag. Each image is specified as IMAGE:SPIN:VERSION:SUBVERSION:ARCH
, e.g:- fedora:workstation:28::x86_64
- clear:live:25550::x86_64
- yocto:core-image-minimal:2.5.1::x86
Note that you can specify a partial image name and the closest match to it will be selected. From the previous example, asking for fedora would auto select fedora:workstation:28::x86_64, assuming the target supports the x86_64 architecture.
- boot_dev (str) –
(optional) which is the boot device to use, where the boot loader needs to be installed in a boot partition. e.g.:
sda
for /dev/sda or mmcblk01
for /dev/mmcblk01. Defaults to the value of the
pos_boot_dev
tag. - root_part_dev (str) –
(optional) which is the device to use for the root partition. e.g:
mmcblk0p4
for /dev/mmcblk0p4 or hda5
for /dev/hda5. If not specified, the system will pick one from all the available root partitions, trying to select the one whose contents are most similar to what is being installed, to minimize the install time.
- extra_deploy_fns –
list of functions to call after the image has been deployed. e.g.:
>>> def deploy_linux_kernel(ic, target, kws, kernel_file = None): >>> ...
the function will be passed keywords which contain values found out during this execution
Returns str: name of the image that was deployed (in case it was guessed)
- FIXME:
- increase in property bd.stats.client.sos_boot_failures and bd.stats.client.sos_boot_count (to get a baseline)
- tag bd.stats.last_reset to DATE
Note: you might want the interconnect power cycled
-
-
tcfl.pos.
image_seed_match
(lp, goal)¶ Given two image/seed specifications, return the most similar one
>>> lp = {
>>>     'part1': 'clear:live:25550::x86-64',
>>>     'part2': 'fedora:workstation:28::x86',
>>>     'part3': 'rtk::91',
>>>     'part4': 'rtk::90',
>>>     'part5': 'rtk::114',
>>> }
>>> _seed_match(lp, "rtk::112")
>>> ('part5', 0.933333333333, 'rtk::114')
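The similarity scoring behind this can be approximated with the standard library's difflib; the following is a sketch of the concept only (closest_image is a hypothetical helper, not TCF's implementation, which weighs the IMAGE:SPIN:VERSION fields separately, so scores will differ):

```python
import difflib

def closest_image(partitions, goal):
    # score each installed image spec against the goal spec and keep
    # the best match; a rough stand-in for tcfl.pos.image_seed_match
    best = (None, 0.0, None)
    for part, image in partitions.items():
        score = difflib.SequenceMatcher(None, image, goal).ratio()
        if score > best[1]:
            best = (part, score, image)
    return best

lp = {
    'part1': 'clear:live:25550::x86-64',
    'part3': 'rtk::91',
    'part5': 'rtk::114',
}
# picks 'part5' / 'rtk::114', the spec most similar to the goal
print(closest_image(lp, "rtk::112"))
```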
-
tcfl.pos.
deploy_tree
(_ic, target, _kws)¶ Rsync a local tree to the target after imaging
This is normally given to
target.pos.deploy_image
as:>>> target.deploy_tree_src = SOMELOCALLOCATION >>> target.pos.deploy_image(ic, IMAGENAME, >>> extra_deploy_fns = [ tcfl.pos.deploy_linux_kernel ])
-
tcfl.pos.
deploy_path
(ic, target, _kws, cache=True)¶ Rsync a local tree to the target after imaging
This is normally given to
target.pos.deploy_image
as:>>> target.deploy_path_src = self.kws['srcdir'] + "/collateral/movie.avi" >>> target.deploy_path_dest = "/root" # optional,defaults to / >>> target.pos.deploy_image(ic, IMAGENAME, >>> extra_deploy_fns = [ tcfl.pos.deploy_linux_kernel ])
-
class
tcfl.pos.
tc_pos0_base
(name, tc_file_path, origin)¶ A template for testcases that install an image in a target that can be provisioned with Provisioning OS.
Unlike
tc_pos_base
, this class needs the targets being declared and called ic and target, such as:
>>> @tc.interconnect("ipv4_addr")
>>> @tc.target('pos_capable')
>>> class my_test(tcfl.tl.tc_pos0_base):
>>>     def eval(self, ic, target):
>>>         target.shell.run("echo Hello'' World",
>>>                          "Hello World")
Please refer to
tc_pos_base
for more information.-
image_requested
= None¶ Image we want to install in the target
Note this can be specialized in a subclass such as
>>> class my_test(tcfl.tl.tc_pos_base):
>>>
>>>     image_requested = "fedora:desktop:29"
>>>
>>>     def eval(self, ic, target):
>>>         ...
-
image
= 'image-not-deployed'¶ Once the image was deployed, this will be set with the name of the image that was selected.
-
deploy_image_args
= {}¶
-
login_user
= 'root'¶ Which user shall we login as
-
delay_login
= 0¶ How many seconds to delay before login in once the login prompt is detected
-
deploy_50
(ic, target)¶
-
start_50
(ic, target)¶
-
teardown_50
()¶
-
class_result
= 0 (0 0 0 0 0)¶
-
-
class
tcfl.pos.
tc_pos_base
(name, tc_file_path, origin)¶ A template for testcases that install an image in a target that can be provisioned with Provisioning OS.
This basic template deploys an image specified in the environment variable
IMAGE
or in self.image_requested, power cycles into it and waits for a prompt in the serial console. This forcefully declares this testcase needs:
- a network that supports IPv4 (for provisioning over it)
- a target that supports Provisioning OS
if you want more control over said conditions, use tc_pos0_base, for which the targets have to be declared. Also, more knobs are available there.
To use:
>>> class my_test(tcfl.tl.tc_pos_base):
>>>     def eval(self, ic, target):
>>>         target.shell.run("echo Hello'' World",
>>>                          "Hello World")
All the methods (deploy, start, teardown) defined in the class are suffixed
_50
, so it is easy to do extra tasks before and after:
>>> class my_test(tcfl.tl.tc_pos_base):
>>>     def start_60(self, ic):
>>>         ic.release() # we don't need the network after imaging
>>>
>>>     def eval(self, ic, target):
>>>         target.shell.run("echo Hello'' World",
>>>                          "Hello World")
-
class_result
= 0 (0 0 0 0 0)¶
-
tcfl.pos.
cmdline_pos_capability_list
(args)¶
-
tcfl.pos.
cmdline_setup
(argsp)¶
8.1.3.2. Provisioning OS: bootloader configuration for EFI systems¶
This module provides capabilities to configure the boot of a UEFI system with the Provisioning OS.
One of the top level calls is boot_config_multiroot()
which is
called by tcfl.pos.deploy_image
to configure the boot for a target
that just got an image deployed to it using the multiroot methodology.
-
tcfl.pos_uefi.
boot_config_multiroot
(target, boot_dev, image)¶ Configure the target to boot using the multiroot methodology
-
tcfl.pos_uefi.
boot_config_fix
(target)¶
8.1.3.3. Provisioning OS: partitioning schema for multiple root FSs per device¶
The Provisioning OS multiroot methodology partitions a system with multiple root filesystems; different OSes are installed in each root so it is fast to switch from one to another to run things in an automated fashion.
The key to the operation is that the server maintains a list of OS images available to be rsynced to the target’s filesystem. rsync can copy straight or transmit only the minimum set of needed changes.
This also speeds up deployment of an OS to the root filesystems, as by picking a root filesystem that already has an OS installed similar to the one to be deployed (eg: a workstation vs a server version), the amount of data to be transferred is greatly reduced.
Thus, the following scenarios, sorted from most to least data transfer (and thus from slowest to fastest operation):
- can install on an empty root filesystem: in this case a full installation is done
- can refresh an existing root filesystem to the destination: some
things might be shared or the same and a partial transfer can be
done; this might be the case when:
- moving from one distro to another
- moving from one version to another of the same distro
- moving from one spin of one distro to another
- can update an existing root filesystem: in this case very little change is done and we are just verifying nothing was unduly modified.
8.1.3.3.1. Partition Size specification¶
To simplify setup of targets, a string such as “1:4:10:50” is given to denote the sizes of the different partitions:
- 1 GiB for /boot
- 4 GiB for swap
- 10 GiB for scratch (can be used for whatever the script wants, needs to be formatted/initialized before use)
- 50 GiB for multiple root partitions (until the disk size is exhausted)
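Interpreting such a spec is simple string splitting; a minimal sketch (the field names here are illustrative, not TCF's internal representation):

```python
def parse_size_spec(spec):
    # "1:4:10:50" -> GiB sizes for the boot, swap and scratch
    # partitions and for each of the root partitions (which are
    # allocated repeatedly until the disk is exhausted)
    boot, swap, scratch, root = (int(field) for field in spec.split(":"))
    return { "boot": boot, "swap": swap, "scratch": scratch, "root": root }

print(parse_size_spec("1:4:10:50"))
# {'boot': 1, 'swap': 4, 'scratch': 10, 'root': 50}
```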
-
tcfl.pos_multiroot.
mount_fs
(target, image, boot_dev)¶ Boots a root filesystem on /mnt
The partition used as a root filesystem is picked up based on the image that is going to be installed; we look for one that has the most similar image already installed and pick that.
Returns: name of the root partition device
8.1.4. Other target interfaces¶
8.1.4.1. Press and release buttons in the target¶
Extension to
tcfl.tc.target_c
to manipulate buttons connected to the target. Buttons can be pressed, released, or operated as a sequence (eg: press button1, release button2, wait 0.25s, press button2, wait 1s, release button1).
>>> target.buttons.list()
>>> target.buttons.press('button1')
>>> target.buttons.release('button2')
>>> target.buttons.sequence([
>>>     ( 'button1', 'press' ),
>>>     ( 'button2', 'release' ),
>>>     ( 'wait 1', 0.25 ),
>>>     ( 'button2', 'press' ),
>>>     ( 'wait 2', 1 ),
>>>     ( 'button1', 'release' ),
>>> ])
Note that for this interface to work, the target has to expose a buttons interface and expose said buttons (list them). You can use the command line:
$ tcf button-list TARGETNAME
to find the buttons available to a target and use
button-press
,button-release
andbutton-click
to manipulate from the command line.
8.1.4.2. Capture snapshots or streams of target data, such as screenshots, audio, video, network, etc¶
The capture interface allows capturing screenshots, audio and video streams, network traffic, etc.
This provides an abstract interface to access it, as well as means to wait for things to be found in such captures, such as images in screenshots.
-
class
tcfl.target_ext_capture.
extension
(target)¶ When a target supports the capture interface, its tcfl.tc.target_c object will expose target.capture, where the following calls can be made to capture data from it.
A streaming capturer will start capturing when
start()
is called and stop whenstop_and_get()
is called, bringing the capture file from the server to the machine executing tcf run.A non streaming capturer just takes a snapshot when
get()
is called.You can find available capturers with
list()
or:
$ tcf capture-list TARGETNAME
vnc0:ready
screen:ready
video1:not-capturing
video0:ready
a ready capturer is capable of taking screenshots only
or:
$ tcf list TARGETNAME | grep capture:
  capture: vnc0 screen video1 video0
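The NAME:STATE lines that tcf capture-list prints map directly to the dictionary shape list() returns; a hedged sketch for the simple one-colon form shown above (snapshot capturers can print extra :-separated fields, which this does not handle):

```python
def parse_capture_list(output):
    # one "NAME:STATE" pair per line, eg "vnc0:ready"; split only on
    # the first colon so states like "not-capturing" survive intact
    capturers = {}
    for line in output.strip().splitlines():
        name, state = line.strip().split(":", 1)
        capturers[name] = state
    return capturers

sample = """\
vnc0:ready
screen:ready
video1:not-capturing
video0:ready
"""
print(parse_capture_list(sample))
```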
-
start
(capturer)¶ Start capturing the stream with capturer capturer
(if this is not a streaming capturer, nothing happens)
>>> target.capture.start("screen_stream")
Parameters: capturer (str) – capturer to use, as listed in the target’s capture Returns: dictionary of values passed by the server
-
stop_and_get
(capturer, local_filename)¶ If this is a streaming capturer, stop streaming and return the captured data or if no streaming, take a snapshot and return it.
>>> target.capture.stop_and_get("screen_stream", "file.avi") >>> target.capture.get("screen", "file.png") >>> network.capture.get("tcpdump", "file.pcap")
Parameters: Returns: dictionary of values passed by the server
-
stop
(capturer)¶ If this is a streaming capturer, stop streaming and discard the captured content.
>>> target.capture.stop("screen_stream")
Parameters: capturer (str) – capturer to use, as listed in the target’s capture
-
get
(capturer, local_filename)¶ This is the same as
stop_and_get()
.
-
list
()¶ List capturers available for this target.
>>> r = target.capture.list()
>>> print(r)
>>> {'screen': 'ready', 'audio': 'not-capturing', 'screen_stream': 'capturing'}
Returns: dictionary of capturers and their state
-
image_on_screenshot
(template_image_filename, capturer='screen', in_area=None, merge_similar=0.7, min_width=30, min_height=30, poll_period=3, timeout=130, raise_on_timeout=<class 'tcfl.tc.error_e'>, raise_on_found=None)¶ Returns an object that finds an image/template in a screenshot from the target.
This object is then given to
tcfl.tc.tc_c.expect()
to poll for screenshots until the image is detected:
>>> class _test(tcfl.tc.tc_c):
>>>     ...
>>>     def eval(self, target):
>>>         ...
>>>         r = self.expect(
>>>             target.capture.image_on_screenshot('icon1.png'),
>>>             target.capture.image_on_screenshot('icon2.png'))
upon return, r is a dictionary with the detection information for each icon:
>>> {
>>>     "icon1.png": [
>>>         (
>>>             1.0,
>>>             ( 0.949, 0.005, 0.968, 0.0312 ),
>>>             # relative (X0, Y0) to (X1, Y1)
>>>             ( 972, 4, 992, 24)
>>>             # absolute (X0, Y0) to (X1, Y1)
>>>         ),
>>>         (
>>>             0.957,
>>>             ( 0.948, 0.004, 0.969, 0.031 ),
>>>             ( 971, 3, 992, 24)
>>>         ),
>>>     ],
>>>     "icon2.png": [
>>>         (
>>>             0.915,
>>>             ( 0.948, 0.004, 0.970, 0.032 ),
>>>             ( 971, 3, 993, 25)
>>>         )
>>>     ]
>>> }
This detector’s return values for each icon are a list of squares where the template was found. On each entry we get a list of:
- the scale of the template
- a square in resolution-independent coordinates: (0, 0) being the top left corner, (1, 1) the bottom right corner
- a square in the screen’s capture resolution; (0,0) being the top left corner.
the detector will also produce collateral in the form of screenshots with annotations where the icons were found, named report-[RUNID]:HASHID.NN[.LABEL].detected.png, where NN is a monotonically increasing number (see RUNID and HASHID above for details).
Parameters: - template_image_filename (str) –
name of the file that contains the image that we will look for in the screenshot. This can be in jpeg, png, gif and other formats.
If the filename is relative, it is considered relative to the source file that calls this function.
- capturer (str) –
(optional, default screen) where to capture the screenshot from; this has to be a capture output that supports screenshots in a graphical format (PNG, JPEG, etc), eg:
$ tcf capture-list nuc-01A
...
hdmi0_screenshot:snapshot:image/png:ready
screen:snapshot:image/png:ready
...
either of these two could be used; screen is taken as the default that any target with graphic capture capabilities will provide by convention.
- in_area –
(optional) bounding box defining a square where the image/template has to be found for it to be considered; it is a very basic mask.
The format is (X0, Y0, X1, Y1), where all numbers are floats from 0 to 1. (0, 0) is the top left corner, (1, 1) the bottom right corner. Eg:
- (0, 0, 0.5, 0.5) the top left 1/4th of the screen
- (0, 0.5, 1, 1) the bottom half of the screen
- (0.5, 0, 1, 1) the right half of the screen
- (0.95, 0, 1, 0.05) a square with 5% side on the top right corner of the screen
- merge_similar (float) –
(default 0.7) value from 0 to 1 that indicates how similar two detections have to be before they are merged into a single one.
0 means two detections don't overlap at all, 1 means two detections have to be exactly the same. 0.85 would mean that the two detections overlap on 85% of the surface.
- min_width (int) – (optional, default 30) minimum width of the template when scaling.
- min_height (int) – (optional, default 30) minimum height of the template when scaling.
The rest of the arguments are described in
tcfl.tc.expectation_c
.
-
-
tcfl.target_ext_capture.
cmdline_setup
(argsp)¶
8.1.4.3. Raw access to the target’s serial consoles¶
This exposes APIs to interface with the target’s serial consoles and the hookups for accessing them from the command line.
-
class
tcfl.target_ext_console.
expect_text_on_console_c
(text_or_regex, console=None, poll_period=0.25, timeout=30, previous_max=4096, raise_on_timeout=<class 'tcfl.tc.failed_e'>, raise_on_found=None, name=None, target=None)¶ Object that expects to find a string or regex in a target’s serial console.
See parameter description in builder
console.expect_text()
, as this is meant to be used with the expecter engine,tcfl.tc.tc_c.expect()
.-
console
¶ Console this expectation is attached to
Note that if initialized with the default console, we’ll always resolve which one it is, since it is needed to keep track of where to store things.
-
max_size
= 65536¶ Maximum amount of bytes to read on each read iteration in
poll()
; this is so that if a (broken) target is spewing gigabytes of data, we don’t get stuck here just reading from it.
-
poll_context
()¶ Return a string that uniquely identifies the polling source for this expectation so multiple expectations that are polling from the same place don’t poll repeatedly.
For example, if we are looking for multiple image templates in a screenshot, it does not make sense to take one screenshot per image. It can take one screenshot and look for the images in the same place.
Thus:
if we are polling from a target with role target.want_name, from its screen capturer called VGA, our context becomes:
>>> return '%s-%s' % (self.target.want_name, "VGA")
so it follows that for a generic expectation from a screenshot capturer stored in self.capturer:
>>> return '%s-%s' % (self.target.want_name, self.capturer)
for a serial console, it would become:
>>> return '%s-%s' % (self.target.want_name, self.console_name)
-
poll
(testcase, run_name, _buffers_poll)¶ Poll a given expectation for new data from their data source
The expect engine will call this from
tcfl.tc.tc_c.expect()
periodically to get data in which to detect what we are expecting. This data could be serial console output, video output, screenshots, network data, anything. The implementation of this interface will store the data (appending or replacing, depending on its nature) in buffers_poll.
For example, a serial console reader might read from the serial console and append to a file; a screenshot capturer might capture the screenshot and put it in a file and make the file name available in buffers_poll[‘filename’].
Note that when we are working with multiple expectations, if a number of them share the same data source (as determined by
poll_context()
), only one poll per each will be done and they will be expected to share the polled data stored in buffers_poll.Parameters: - testcase (tcfl.tc.tc_c) – testcase for which we are polling.
- run_name (str) – name of this run of
tcfl.tc.tc_c.expect()
–they are always different. - buffers_poll (dict) – dictionary where we can store state
for this poll so it can be shared between calls. Detection
methods that use the same polling source (as given by
poll_context()
) will all be given the same storage space.
-
detect
(testcase, run_name, _buffers_poll, buffers)¶ See
expectation_c.detect()
for reference on the arguments. Returns: dictionary of data describing the match, including an iterator over the console output
-
on_timeout
(run_name, poll_context, ellapsed, timeout)¶ Perform an action when the expectation times out being found
Called by the innards of the expect engine when the expectation times out; by default, raises a generic exception (as specified during the expectation’s creation); can be overridden to offer a more specific message, etc.
-
flush
(testcase, run_name, buffers_poll, buffers, results)¶ Generate collateral for this expectation
This is called by
tcfl.tc.tc_c.expect()
when all the expectations are completed and can be used to, for example, add marks to an image indicating where a template or icon was detected. Note different expectations might be creating collateral from the same source, in which case you need to pile on (eg: adding multiple detection marks to the same image).
Collateral files shall be generated with name
tcfl.tc.tc_c.report_file_prefix
such as:>>> collateral_filename = testcase.report_file_prefix + "something"
will generate filename report-RUNID:HASHID.something; thus, when multiple testcases are executed in parallel, they will not overwrite each other’s collateral.
Parameters: - testcase (tcfl.tc.tc_c) – testcase for which we are detecting.
- run_name (str) – name of this run of
tcfl.tc.tc_c.expect()
–they are always different. - buffers_poll (dict) – dictionary where the polled data has
is available. Note Detection methods that use the same
poling source (as given by
poll_context()
) will all be given the same storage space. as perpoll()
above. - buffers (dict) – dictionary available exclusively to this
expectation object to keep data from run to run. This was
used by
detect()
to store data needed during the detection process. - results (dict) – dictionary of results generated by
detect()
as a result of the detection process.
-
-
class
tcfl.target_ext_console.
extension
(target)¶ Extension to
tcfl.tc.target_c
to run methods from the console management interface to TTBD targets.Use as:
>>> target.console.read() >>> target.console.write() >>> target.console.setup() >>> target.console.list()
Consoles might be disabled (because, for example, the target has to be on some network for them to be enabled); you can get console specific parameters with:
>>> params = target.console.setup_get()
You can set them up (these are implementation specific):
>>> target.console.setup(CONSOLENAME, param1 = val1, param2 = val2...)
Once setup and ready to enable/disable:
>>> target.console.enable() >>> target.console.disable()
You can set the default console with:
>>> target.console.default = NAME
A common pattern is for a system to boot up using a serial console and once it is up, SSH is started and the default console is switched to an SSH based console, faster and more reliable.
The targets are supposed to declare the following consoles:
- default: the one we use by default
- preferred (optional): the one to switch to once done booting, but which might need console-specific setup (like an SSH server starting, etc)
When the console is set to another default, the property console-default will reflect that. It will be reset upon power-on.
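The default/preferred convention above can be modeled with a minimal stand-in class (hypothetical, for illustration only; the real extension talks to the server and can also run shell setup):

```python
class console_model:
    # models ONLY the default/preferred console convention described
    # above; not the tcfl.target_ext_console.extension implementation
    def __init__(self, consoles):
        self.consoles = consoles        # eg: ['serial0', 'preferred']
        self.default = consoles[0]      # boot happens on this one

    def select_preferred(self):
        # if the target declares a console called 'preferred', make it
        # the default; otherwise this changes nothing
        if 'preferred' in self.consoles:
            self.default = 'preferred'

console = console_model(['serial0', 'preferred'])
# ... target boots on serial0, SSH comes up ...
console.select_preferred()
print(console.default)   # the faster console is now the default
```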
-
default
¶ Return the default console
-
select_preferred
(console=None, shell_setup=True, **console_setup_kwargs)¶ Setup, enable and switch as default to the preferred console
If the target declares a preferred console, then switching to it after setting up whatever is needed (eg: SSH daemons in the target, parameters in the console) usually yields a faster and more reliable console.
If there is no preferred console, then this doesn’t change anything.
Parameters: - console (str) – (optional) console name to make preferred; default to whatever the target declares (by maybe exporting a console called preferred).
- shell_setup –
(optional, default) setup the shell up by disabling command line editing (makes it easier for the automation) and set up hooks that will raise an exception if a shell command fails.
By default calls target.shell.setup(); if False, nothing will be called. No arguments are passed, the function needs to operate on the default console.
The rest of the arguments are passed verbatim to
target.console.setup
to setup the console and are thus console specific.
-
enable
(console=None)¶ Enable a console
Parameters: console (str) – (optional) console to enable; if missing, the default one.
-
disable
(console=None)¶ Disable a console
Parameters: console (str) – (optional) console to disable; if missing, the default one.
-
state
(console=None)¶ Return the given console’s state
Parameters: console (str) – (optional) console to query; if missing, the default one Returns: True if enabled, False otherwise
-
setup
(console, **parameters)¶ Setup console’s parameters
If no parameters are given, reset to defaults.
List of current parameters can be obtained with
setup_get()
.
-
setup_get
(console)¶ Return a dictionary with current parameters.
-
read
(console=None, offset=0, max_size=0, fd=None)¶ Read data received on the target’s console
Parameters: Returns: data read (or if written to a file descriptor, amount of bytes read)
-
read_full
(console=None, offset=0, max_size=0, fd=None)¶ Like
read()
, reads data received on the target’s console returning also the stream generation and offset at which to read the next time to get new data.Stream generation is a monotonically increasing number that is incrased every time the target is power cycled.
Parameters: Returns: tuple consisting of: - stream generation - stream size after reading - data read (or if written to a file descriptor,
amount of bytes read)
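The generation/offset contract lends itself to a small client-side tailing helper; a sketch under the assumption that read_full returns (generation, size, data) as described (tail and the fake console below are illustrative, not part of the TCF API):

```python
def tail(read_full, state):
    # read only data that arrived since the last call; if the stream
    # generation changed (the target power cycled and the console
    # output restarted), re-read from offset zero
    generation, size, data = read_full(offset = state.get('offset', 0))
    if generation != state.get('generation'):
        generation, size, data = read_full(offset = 0)
        state['generation'] = generation
    state['offset'] = size      # next call picks up from here
    return data

# fake console standing in for target.console.read_full
buf = { 'generation': 1, 'data': "boot ok\n" }
def fake_read_full(console = None, offset = 0, max_size = 0, fd = None):
    return buf['generation'], len(buf['data']), buf['data'][offset:]

state = {}
print(repr(tail(fake_read_full, state)))   # all output so far
buf['data'] += "login: "
print(repr(tail(fake_read_full, state)))   # only the new part
```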
-
size
(console=None)¶ Return the amount of bytes so far read from the console
Parameters: console (str) – (optional) console to read from
-
write
(data, console=None)¶ Write data to a console
Parameters: - data – data to write (string or bytes)
- console (str) – (optional) console to write to
-
list
()¶
-
capture_filename
(console=None)¶ Return the name of the file where this console is being captured to
-
text_capture_file
(console=None)¶ Return a descriptor to the file where this console is being captured to
-
text_poll_context
(console=None)¶ Return the polling context that will be associated with a target’s console.
Parameters: console (str) – (optional) console name or take default
-
text
(*args, **kwargs)¶ Return an object to expect a string or regex in this target’s console. This can be fed to
tcfl.tc.tc_c.expect()
:>>> self.expect( >>> target.console.text(re.compile("DONE.*$"), timeout = 30) >>> )
or for leaving it permanently installed as a hook to, eg, raise an exception if a non-wanted string is found:
>>> testcase.expect_global_append( >>> target.console.text( >>> "Kernel Panic", >>> timeout = 0, poll_period = 1, >>> raise_on_found = tc.error_e("kernel panicked"), >>> ) >>> )
Parameters: (other parameters are the same as described in
tcfl.tc.expectation_c
.)
-
capture_iterator
(console, offset_from=0, offset_to=0)¶ Iterate over the captured contents of the console
expect_text_on_console_c.poll
has created a file where it has written all the contents read from the console; this function is a generator that iterates over it, yielding safe UTF-8 strings. Note these are not resettable, so to use in attachments with multiple report drivers, use generator_factory() instead.
Parameters:
-
generator_factory
(console, offset_from=0, offset_to=0)¶ Return a generator factory that creates iterators to dump console’s received data
Parameters:
-
tcfl.target_ext_console.
f_write_retry_eagain
(fd, data)¶
8.1.4.4. Access target’s debugging capabilities¶
-
class
tcfl.target_ext_debug.
extension
(target)¶ Extension to
tcfl.tc.target_c
to run methods from the debug interface
to targets. Use as:
>>> target.debug.list() >>> target.debug.start() >>> target.debug.stop() >>> target.debug.reset() >>> target.debug.halt() >>> target.debug.reset_halt() >>> target.debug.resume()
etc …
-
list
(components=None)¶ Return debugging information about each component
Parameters: components (list(str)) – (optional) list of subcomponents for which to report the information (default all)
Returns dict: dictionary keyed by component describing each component’s debugging status and other information.
If a component’s value is None, debugging is not started for that component. Otherwise the dictionary will include values keyed by string that are implementation specific, with the common ones documented in
ttbl.debug.impl_c.debug_list()
.
-
start
(components=None)¶ Start debugging support on the target or individual components
Note it might need a power cycle for the change to be effective, depending on the component.
If called before powering on, the target will wait for the debugger to connect before starting the kernel (when possible).
Parameters: components (list(str)) – (optional) list of components whose debugging support shall start (defaults to all)
-
stop
(components=None)¶ Stop debugging support on the target
Note it might need a power cycle for the change to be effective, depending on the component.
Parameters: components (list(str)) – (optional) list of components whose debugging support shall stop (defaults to all)
-
halt
(components=None)¶ Halt the target’s CPUs
Parameters: components (list(str)) – (optional) list of components where to operate (defaults to all)
-
reset
(components=None)¶ Reset the target’s CPUs
Parameters: components (list(str)) – (optional) list of components where to operate (defaults to all)
-
8.1.4.5. Flash the target with fastboot¶
-
class
tcfl.target_ext_fastboot.
extension
(target)¶ Extension to
tcfl.tc.target_c
to run fastboot commands on the target via the server.Use
run()
to execute a command on the target:>>> target.fastboot.run("flash_pos", "partition_boot", >>> "/home/ttbd/partition_boot.pos.img")
a target with the example configuration described in
ttbl.fastboot.interface
would run the command:$ fastboot -s SERIAL flash partition_boot /home/ttbd/partition_boot.pos.img
on the target.
Note that which fastboot commands are allowed on the target is meant to be severely restricted via target-specific configuration, to avoid compromising the system’s security without compromising flexibility.
You can list allowed fastboot commands with (from the example above):
$ tcf fastboot-list TARGETNAME
flash: flash partition_boot ^(.+)$
flash_pos: flash_pos partition_boot /home/ttbd/partition_boot.pos.img
-
run
(command_name, *args)¶
-
list
()¶
-
8.1.4.6. Flash the target with JTAGs and other mechanism¶
-
class
tcfl.target_ext_images.
extension
(target)¶ Extension to
tcfl.tc.target_c
to run methods from the image management interface to TTBD targets.Use as:
>>> target.images.set()
Presence of the images attribute in a target indicates imaging is supported by it.
-
retries
= 4¶ When a deployment fails, how many times can we retry before failing
-
wait
= 4¶ When power cycling a target to retry a flashing operation, how many seconds we wait before powering back on
-
list
()¶ Return a list of image types that can be flashed in this target
-
flash
(images, upload=True)¶ Flash images onto target
>>> target.images.flash({ >>> "kernel-86": "/tmp/file.bin", >>> "kernel-arc": "/tmp/file2.bin" >>> }, upload = True)
or:
>>> target.images.flash({ >>> "vmlinuz": "/tmp/vmlinuz", >>> "initrd": "/tmp/initrd" >>> }, upload = True)
If upload is set to true, this function will first upload the images to the server and then flash them.
Parameters: - images (dict) –
dictionary keyed by (str) image type of things to flash in the target. e.g.:
The types of images supported are determined by the target’s configuration and can be reported with
list()
(or command line tcf images-list TARGETNAME). - upload (bool) – (optional) the image names are local files that need to be uploaded first to the server (this function will take care of that).
- images (dict) –
-
8.1.4.7. Flash the target with ioc_flash_server_app¶
-
class
tcfl.target_ext_ioc_flash_server_app.
extension
(target)¶ Extension to
tcfl.tc.target_c
to run the ioc_flash_server_app command against a target on the server in a safe way. To configure this interface on a target, see
ttbl.ioc_flash_server_app.interface
.-
run
(mode, filename, generic_id=None, baudrate=None)¶ Run the ioc_flash_server_app command on the target in the server in a safe way.
Parameters:
-
8.1.4.8. Power on or off the target or any of its power rail components¶
This module implements the client side API for controlling a target’s power, as well as the hooks to access these interfaces from the command line.
-
class
tcfl.target_ext_power.
extension
(target)¶ Extension to
tcfl.tc.target_c
to interact with the server’s power control interface.Use as:
>>> target.power.on()
>>> target.power.off()
>>> target.power.cycle()
>>> target.power.get()
>>> target.power.list()
-
get
()¶ Return a target’s power status, True if powered, False otherwise.
A target is considered on when all of its power rail components are on; fake power components report power state as None and those are not taken into account.
-
list
()¶ Return a list of a target’s power rail components and their status
Returns: dictionary keyed by component number and their state (True if powered, False if not, None if not applicable, for fake power controls)
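The aggregate semantics just described can be sketched in a few lines (the component names and the helper are illustrative, not part of the API):

```python
def power_state(components):
    """Compute the aggregate power state from a power rail listing.

    Mirrors the described semantics: fake components (state None)
    are ignored; the target is on only if every real component is on.
    """
    real = [state for state in components.values() if state is not None]
    return all(real)

# hypothetical power rail listing, as power.list() might return it
rail = {"main": True, "usb-relay": True, "fake-delay": None}
print(power_state(rail))  # True
```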
-
off
(component=None)¶ Power off a target or parts of its power rail
Parameters: component (str) – (optional) name of component to power off, defaults to whole target’s power rail
-
on
(component=None)¶ Power on a target or parts of its power rail
Parameters: component (str) – (optional) name of component to power on, defaults to whole target’s power rail
-
cycle
(wait=None, component=None)¶ Power cycle a target or one of its components
Parameters:
-
reset
()¶ Reset a target
This interface is deprecated.
-
8.1.4.9. Run commands a shell available on a target’s serial console¶
Also allows basic file transmission over serial line.
8.1.4.9.1. Shell prompts¶
Waiting for a shell prompt is a much harder problem than it first seems.
Problems:
Background processes or (in the serial console, the kernel) printing lines in the middle.
Even with line-buffered output, when there are different CRLF conventions, a misplaced newline or carriage return can wreak havoc.
As well, if a background process / kernel prints a message after the prompt is printed, a
$
will no longer match. The \Z regex operator cannot be used for the same reason.
CRLF conventions make it harder to use the ^ and $ regex metacharacters.
ANSI escape sequences: a human doesn’t see or notice them, but to the computer / regular expression they are part of the text to be matched.
Thus, matching a single line is the best bet; however, it is almost impossible to guarantee that it is the last one, as the multiple formats of prompts could match other text.
-
tcfl.target_ext_shell.
shell_prompts
= ['[-/\\@_~: \\x1b=;\\[0-9A-Za-z]+ [\\x1b=;\\[0-9A-Za-z]*[#\\$][\\x1b=;\\[0-9A-Za-z]* ', '[^@]+@.*[#\\$] ', '[^:]+:.*[#\\$>]']¶ What is in a shell prompt?
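The pitfalls above can be demonstrated with a simplified prompt pattern (the prompt strings and ANSI sequences below are made up for illustration):

```python
import re

# a simplified prompt pattern in the spirit of the list above:
# user@host, anything, then # or $ plus a space
prompt = re.compile(r'[^@]+@.*[#\$] ')

clean = "root@target:~# "
# the same prompt wrapped in ANSI color sequences, as many shells
# emit it: invisible to a human, but plain text to the regex
ansi = "\x1b[01;32mroot@target\x1b[00m:~# "
assert prompt.search(clean) and prompt.search(ansi)

# anchoring on $ breaks as soon as a background message lands
# after the prompt...
anchored = re.compile(r'[^@]+@.*[#\$] $')
noisy = "root@target:~# \r\n[  12.3] kernel: eth0 link up"
assert not anchored.search(noisy)
# ...while the unanchored single-line match still finds the prompt
assert prompt.search(noisy)
```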
-
class
tcfl.target_ext_shell.
shell
(target)¶ Extension to
tcfl.tc.target_c
for targets that support some kind of shell (Linux, Windows) to run some common remote commands without needing to worry about the details.The target has to be set to boot into console prompt, so password login has to be disabled.
>>> target.shell.up()
Waits for the shell to be up and ready; sets it up so that if an error happens, it will print an error message and raise a block exception. Note you can change what is expected as a
shell prompt
.>>> target.shell.run("some command")
Remove remote files (if the target supports it) with:
>>> target.shell.file_remove("/tmp/filename")
Copy files to the target with:
>>> target.shell.file_copy_to("local_file", "/tmp/remote_file")
-
shell_prompt_regex
= <_sre.SRE_Pattern object>¶
-
linux_shell_prompt_regex
= <_sre.SRE_Pattern object>¶ Deprecated, use
shell_prompt_regex
-
setup
(console=None)¶ Setup the shell for scripting operation
In the case of a bash shell, this: - sets the prompt to something easier to latch on to - disables command line editing - traps errors in shell execution
-
up
(tempt=None, user=None, login_regex=<_sre.SRE_Pattern object>, delay_login=0, password=None, password_regex=<_sre.SRE_Pattern object>, shell_setup=True, timeout=None, console=None)¶ Wait for the shell in a console to be ready
Giving it ample time to boot, wait for a
shell prompt
and set up the shell so that if an error happens, it will print an error message and raise a block exception. Optionally login as a user and password.>>> target.shell.up(user = 'root', password = '123456')
Parameters: - tempt (str) – (optional) string to send before waiting for the login prompt (for example, to send a newline that activates the login)
- user (str) – (optional) if provided, it will wait for login_regex before trying to login with this user name.
- password (str) – (optional) if provided, and a password prompt is found, send this password.
- login_regex (str) – (optional) if provided (string or compiled regex) and user is provided, it will wait for this prompt before sending the username.
- password_regex (str) – (optional) if provided (string or compiled regex) and password is provided, it will wait for this prompt before sending the password.
- delay_login (int) – (optional) wait this many seconds before sending the user name after finding the login prompt.
- shell_setup –
(optional, default) setup the shell up by disabling command line editing (makes it easier for the automation) and set up hooks that will raise an exception if a shell command fails.
By default calls target.shell.setup(); if False, nothing will be called. Arguments are passed:
- console = CONSOLENAME: console where to operate; can be None for the default console.
- timeout (int) – [optional] seconds to wait for the login prompt to appear; defaults to 60s plus whatever the target specifies in metadata bios_boot_time.
- console (str) –
[optional] name of the console where to operate; if None it will update the current default console to whatever the server considers it shall be (the console called default).
If a previous run set the default console to something else, setting it to None will update it to what the server considers shall be the default console (default console at boot).
-
crnl_regex
= <_sre.SRE_Pattern object>¶
-
run
(cmd=None, expect=None, prompt_regex=None, output=False, output_filter_crlf=True, timeout=None, trim=False, console=None)¶ Runs some command as a shell command and wait for the shell prompt to show up.
If it fails, it will raise an exception. If you want to get the error code or not have it raise exceptions on failure, you will have to play shell-specific games, such as:
>>> target.shell.run("failing-command || true")
Files can be easily generated in unix targets with commands such as:
>>> target.shell.run("""
>>> cat > /etc/somefile <<EOF
>>> these are the
>>> file contents
>>> that I want
>>> EOF""")
or collecting the output:
>>> target.shell.run("ls --color=never -1 /etc/", output = True)
>>> for file in output.split('\r\n'):
>>>     target.report_info("file %s" % file)
>>>     target.shell.run("md5sum %s" % file)
Parameters: - cmd (str) – (optional) command to run; if none, only the expectations are waited for (if expect is not set, then only the prompt is expected).
- expect – (optional) output to expect (string or regex) before the shell prompt. This can also be a list of things to expect (in the given order)
- prompt_regex – (optional) output to expect (string or regex) as a shell prompt, which is always to be found at the end. Defaults to the preconfigured shell prompt (NUMBER $).
- output (bool) – (optional, default False) return the output of the command to the console; note the output includes the execution of the command itself.
- output_filter_crlf (bool) – (optional, default True) if we
are returning output, filter out
\r\n
to whatever our CRLF convention is. - trim (bool) – if
output
is True, trim the command and the prompt from the beginning and the end of the output respectively (True) - console (str) – (optional) on which console to run; (defaults to None, the default console).
- origin (str) –
(optional) when reporting information about this expectation, what origin shall it list, eg:
- None (default) to get the current caller
- commonl.origin_get(2) also to get the current caller
- commonl.origin_get(1) also to get the current function
or something as:
>>> "somefilename:43"
Returns str: if
output
is true, a string with the output of the command.Warning
if
output_filter_crlf
is False, this output will be\r\n
terminated and it will be confusing because regexes won’t work right away. A quick, dirty fix:
>>> output = output.replace('\r\n', '\n')
output_filter_crlf
enabled replaces this output with>>> output = output.replace('\r\n', target.crlf)
-
file_remove
(remote_filename)¶ Remove a remote file (if the target supports it)
-
files_remove
(*remote_filenames)¶ Remove multiple remote files (if the target supports it)
-
file_copy_to
(local_filename, remote_filename)¶ Send a file to the target via the console (if the target supports it)
Encodes the file to base64 and sends it via the console in chunks of 64 bytes (some consoles are kinda…unreliable) to a file in the target called /tmp/file.b64, which we then decode back to normal.
Assumes the target has python3; permissions are not maintained
Note
it is slow. The limits are not well defined; how big a file can be sent/received will depend on local and remote memory capacity, as files are read whole into memory. This could be optimized to stream instead of reading it all at once, but sending a very big file over a cheap ASCII protocol is not a good idea anyway. You have been warned.
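The encoding mechanism described can be sketched locally; this only illustrates the base64 chunking, not the actual console protocol:

```python
import base64

def chunked_b64(data, chunk_size=64):
    """Encode bytes to base64 and split into small chunks, as would be
    echoed line by line over an unreliable console."""
    encoded = base64.b64encode(data).decode('ascii')
    return [encoded[i:i + chunk_size]
            for i in range(0, len(encoded), chunk_size)]

payload = b"some binary file contents\x00\x01"
chunks = chunked_b64(payload)
# the receiving end concatenates the chunks and decodes them back
reassembled = base64.b64decode("".join(chunks))
assert reassembled == payload
```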
-
file_copy_from
(local_filename, remote_filename)¶ Receive a file from the target via the console (if the target supports it)
Encodes the file to base64 on the target and sends it via the console in chunks of 64 bytes (some consoles are kinda…unreliable), decoding it back to normal on the local end.
Assumes the target has python3; permissions are not maintained
Note
it is slow. The limits are not well defined; how big a file can be sent/received will depend on local and remote memory capacity, as files are read whole into memory. This could be optimized to stream instead of reading it all at once, but sending a very big file over a cheap ASCII protocol is not a good idea anyway. You have been warned.
-
8.1.4.10. Run commands to the target and copy files back and forth using SSH¶
-
class
tcfl.target_ext_ssh.
ssh
(target)¶ Extension to
tcfl.tc.target_c
for targets that support SSH, to run remote commands via SSH or copy files around. Currently the target has to be set to accept passwordless login, either by:
disabling password for the target user (DANGEROUS!! use only on isolated targets)
storing SSH identities in SSH agents (FIXME: not implemented yet) and provisioning the keys via cloud-init or similar
Use as (full usage example in
/usr/share/tcf/examples/test_linux_ssh.py
):As described in
IP tunnels
, upon which this extension builds, this will only work with a target with IPv4/6 connectivity, which means there has to be an interconnect powered on and reachable from the server and kept active, so the server doesn’t power it off. Ensure the interconnect is powered on before powering on the target; otherwise some targets won’t acquire an IP configuration (as they will assume there is no interconnect); e.g., on start:
>>> def start(self, ic, target):
>>>     ic.power.on()
>>>     target.power.cycle()
>>>     target.shell.linux_shell_prompt_regex = re.compile('root@.*# ')
>>>     target.shell.up(user = 'root')
indicate the tunneling system which IP address is to be used:
>>> target.tunnel.ip_addr = target.addr_get(ic, "ipv4")
Use SSH:
>>> exitcode, _stdout, _stderr = target.ssh.call("test -f file_that_should_exist")
>>> target.ssh.check_output("test -f file_that_should_exist")
>>> output = target.ssh.check_output("cat some_file")
>>> if 'what_im_looking_for' in output:
>>>     do_something()
>>> target.ssh.copy_to("somedir/local.file", "remotedir")
>>> target.ssh.copy_from("someremotedir/file", "localdir")
FIXME: provide pointers to a private key to use
Troubleshooting:
SSH fails to login; open the report file generated with tcf run, look at the detailed error output:
returncode will show as 255: login error– do you have credentials loaded? is the configuration in the target allowing you to login as such user with no password? or do you have the SSH keys configured?:
E#1 @local eval errored: ssh command failed: echo hello E#1 @local ssh_cmd: /usr/bin/ssh -vp 5400 -q -o BatchMode yes -o StrictHostKeyChecking no root@jfsotc10.jf.intel.com -t echo hello ... E#1 @local eval errored trace: error_e: ('ssh command failed: echo hello', {'ssh_cmd': '/usr/bin/ssh -vp 5400 -q -o BatchMode yes -o StrictHostKeyChecking no root@jfsotc10.jf.intel.com -t echo hello', 'output': '', 'cmd': ['/usr/bin/ssh', '-vp', '5400', '-q', '-o', 'BatchMode yes', '-o', 'StrictHostKeyChecking no', 'root@jfsotc10.jf.intel.com', '-t', 'echo hello'], 'returncode': 255}) E#1 @local returncode: 255
For seeing verbose SSH output to debug, append
-v
to variable _ssh_cmdline_options:>>> target.ssh._ssh_cmdline_options.append("-v")
-
host
= None¶ SSH destination host; this will be filled out automatically with any IPv4 or IPv6 address the target declares, but can be assigned to a new value if needed.
-
login
= None¶ SSH login identity; default to root login, as otherwise it would default to the login of the user running the daemon.
-
port
= None¶ SSH port to use
-
run
(cmd, nonzero_e=None)¶ Run a shell command over SSH, return exitcode and output
Similar to
subprocess.call()
; note SSH is normally run in verbose mode (unless-q
has been set in _ssh_cmdline_options), so the stderr will contain SSH debug information. Parameters: - cmd (str) –
shell command to execute via SSH, substituting any
%(KEYWORD)[ds]
field from the target’s keywords intcfl.tc.target_c.kws
See how to find which fields are available.
- nonzero_e (tcfl.tc.exception) – exception to raise in case of non
zero exit code. Must be a subclass of
tcfl.tc.exception
(i.e.:tcfl.tc.failed_e
,tcfl.tc.error_e
,tcfl.tc.skip_e
,tcfl.tc.blocked_e
) or None (default) to not raise anything and just return the exit code.
Returns: tuple of
exitcode, stdout, stderr
, the two later being two tempfile file descriptors containing the standard output and standard error of running the command.The stdout (or stderr) can be read with:
>>> stdout.read()
- cmd (str) –
-
call
(cmd)¶ Run a shell command over SSH, returning the output
Please see
run()
for argument description; the only difference is this function raises an exception if the call fails.
-
check_call
(cmd, nonzero_e=<class 'tcfl.tc.error_e'>)¶ Run a shell command over SSH, returning the output
Please see
run()
for argument description; the only difference is this function raises an exception if the call fails.
-
check_output
(cmd, nonzero_e=<class 'tcfl.tc.error_e'>)¶ Run a shell command over SSH, returning the output
Please see
run()
for argument description; the only difference is this function returns the stdout only if the call succeeds and raises an exception otherwise.
-
copy_to
(src, dst='', recursive=False, nonzero_e=<class 'tcfl.tc.error_e'>)¶ Copy a file or tree with SCP to the target from the client
Parameters: - src (str) –
local file or directory to copy
Note a relative path will be made relative to the location of the testscript, see
testcase.relpath_to_abs
. - dst (str) – (optional) destination file or directory (defaults to root’s home directory)
- recursive (bool) – (optional) copy recursively (needed for directories)
- nonzero_e (tcfl.tc.exception) – exception to raise in case of
non zero exit code. Must be a subclass of
tcfl.tc.exception
(i.e.:tcfl.tc.failed_e
,tcfl.tc.error_e
,tcfl.tc.skip_e
,tcfl.tc.blocked_e
) or None (default) to not raise anything and just return the exit code.
- src (str) –
-
copy_from
(src, dst='.', recursive=False, nonzero_e=<class 'tcfl.tc.error_e'>)¶ Copy a file or tree with SCP from the target to the client
Parameters: - src (str) – remote file or directory to copy
- dst (str) – (optional) destination file or directory (defaults to current working directory)
- recursive (bool) – (optional) copy recursively (needed for directories)
- nonzero_e (tcfl.tc.exception) – exception to raise in case of
non zero exit code. Must be a subclass of
tcfl.tc.exception
(i.e.:tcfl.tc.failed_e
,tcfl.tc.error_e
,tcfl.tc.skip_e
,tcfl.tc.blocked_e
) or None (default) to not raise anything and just return the exit code.
8.1.4.11. Copy files from and to the server’s user storage area¶
-
class
tcfl.target_ext_store.
extension
(_target)¶ Extension to
tcfl.tc.target_c
to run methods to manage the files in the user’s storage area in the server.Use as:
>>> files = target.store.list()
>>> target.store.upload(REMOTE, LOCAL)
>>> target.store.dnload(REMOTE, LOCAL)
>>> target.store.delete(REMOTE)
Note these files are, for example:
images for the server to flash into targets (usually handled with the
images
)copying specific log files from the server (e.g.: downloading TCP dump captures from tcpdump as done by the
conf_00_lib.vlan_pci
network element).the storage area is common to all targets of the server for each user, thus multiple test cases running in parallel can access it at the same time. Use the testcase’s hash to safely namespace:
>>> tc_hash = self.kws['tc_hash']
>>> target.store.upload(tc_hash + "-NAME", LOCAL)
Presence of the store attribute in a target indicates this interface is supported.
-
upload
(remote, local)¶ Upload a local file to the store
Parameters:
-
dnload
(remote, local)¶ Download a remote file from the store to the local system
Parameters: Returns int: the amount of bytes downloaded
-
delete
(remote)¶ Delete a remote file
Parameters: remote (str) – name of the file to remove from the server
-
list
()¶ List available files and their MD5 sums
8.1.4.12. Plug or unplug things to/from a target¶
This module implements the client side API for controlling the things that can be plugged/unplugged to/from a target.
-
class
tcfl.target_ext_things.
extension
(target)¶ Extension to
tcfl.tc.target_c
to interact with the server’s thing control interface. Use as:
>>> target.things.plug()
>>> target.things.unplug()
>>> target.things.get()
>>> target.things.list()
-
get
(thing)¶ Returns: True if thing is connected, False otherwise
-
list
()¶ Return a list of a target’s things and their state
Returns: dictionary keyed by thing name and its state (True if plugged, False if not, None if the target/thing are not acquired and thus state information is not available).
-
8.1.4.13. Create and remove network tunnels to the target via the server¶
-
class
tcfl.target_ext_tunnel.
tunnel
(target)¶ Extension to
tcfl.tc.target_c
to create IP tunnels to targets with IP connectivity.Use by indicating a default IP address to use for interconnect ic or explicitly indicating it in the
add()
function:
>>> target.tunnel.ip_addr = target.addr_get(ic, "ipv4")
>>> target.tunnel.add(PORT)
>>> target.tunnel.remove(PORT)
>>> target.tunnel.list()
Note that for tunnels to work, the target has to be acquired and IP has to be up on it, which might require it to be connected to some IP network (it can be a TCF interconnect or any other network).
-
add
(port, ip_addr=None, protocol=None)¶ Set up a TCP/UDP/SCTP IPv4 or IPv6 tunnel to the target
A local port of the given protocol in the server is forwarded to the target’s port. Teardown with
remove()
.If the tunnel already exists, it is not recreated, but the port it uses is returned.
- Example: redirect target’s TCP4 port 3000 to a port in the server
that provides
target
(target.kws[‘server’]).>>> server_port = target.tunnel.add(3000) >>> server_name = target.rtb.parsed_url.hostname >>> server_name = target.kws['server'] # alternatively
Now connecting to
server_name:server_port
takes you to the target’s port 3000.Parameters: Returns int local_port: port in the server where to connect to in order to access the target.
-
remove
(port, ip_addr=None, protocol=None)¶ Tear down a TCP/UDP/SCTP IPv4 or IPv6 tunnel to the target previously created with
add()
.Parameters:
-
list
()¶ List existing IP tunnels
Returns: list of tuples: (protocol, target-ip-address, target-port, server-port)
Reminder that the server’s hostname can be obtained from:
>>> target.rtb.parsed_hostname
-
-
tcfl.target_ext_tunnel.
cmdline_setup
(argsp)¶
8.1.5. TCF run Application builders¶
Application builders are a generic tool for building applications of different types.
They all use the same interface to make it easy and fast for the test
case writer to specify what has to be built for which BSP of which
target with a very simple specification given to the
tcfl.tc.target()
decorator:
>>> tcfl.tc.target(app_zephyr = { 'x86': "path/to/zephyr_app" },
>>> app_sketch = { 'arc': "path/to/sketch" })
>>> class mytestcase(tcfl.tc.tc_c):
>>> ...
which allows the testcase developer to point the app builders to the locations of the source code and on which BSPs of the targets it shall run and have it deal with the details of inserting the right code to build, deploy, setup and start the testcase.
This allows the testcase writer to focus on writing the test application.
App builders:
- can be made once and reused multiple times
- they are plugins to the testcase system
- keep no state; they need to be able to gather everything from the parameters passed (this is needed so they can be called from multiple threads).
- are always called app_SOMETHING
Note implementation details on tcfl.app.app_c
; drivers can
be added with tcfl.app.driver_add()
.
Application builders are currently available for:
-
tcfl.app.
import_mp_pathos
()¶
-
tcfl.app.
import_mp_std
()¶
-
tcfl.app.
args_app_src_check
(app_name, app_src)¶ Verify the source specification for a given App Driver
-
tcfl.app.
driver_add
(cls, name=None)¶ Add a new driver for app building
Note the driver will be registered under its class name; it is recommended to name drivers app_something.
-
tcfl.app.
driver_valid
(name)¶
-
tcfl.app.
get_real_srcdir
(origin_filename, _srcdir)¶ Return the absolute version of _srcdir, which might be relative to the file described by origin_filename.
-
tcfl.app.
configure
(ab, testcase, target, app_src)¶
-
tcfl.app.
build
(ab, testcase, target, app_src)¶
-
tcfl.app.
deploy
(images, ab, testcase, target, app_src)¶
-
tcfl.app.
setup
(ab, testcase, target, app_src)¶
-
tcfl.app.
start
(ab, testcase, target, app_src)¶
-
tcfl.app.
teardown
(ab, testcase, target, app_src)¶
-
tcfl.app.
clean
(ab, testcase, target, app_src)¶
-
class
tcfl.app.
app_c
¶ Subclass this to create an App builder, provide implementations only of what is needed.
The driver will be invoked by the test runner using the methods
tcfl.app.configure()
,tcfl.app.build()
,tcfl.app.deploy()
,tcfl.app.setup()
,tcfl.app.start()
,tcfl.app.teardown()
,tcfl.app.clean()
.If your App builder does not need to implement any, then it is enough with not specifying it in the class.
Targets with multiple BSPs
When the target contains multiple BSPs the App builders are invoked for each BSP in the same order as they were declared with the decorator
tcfl.tc.target()
. E.g.:>>> @tcfl.tc.target(app_zephyr = { 'arc': 'path/to/zephyr_code' }, >>> app_sketch = { 'x86': 'path/to/arduino_code' })
We are specifying that the x86 BSP in the target has to run code to be built with the Arduino IDE/compiler and the arc core will run a Zephyr app, built with the Zephyr SDK.
If the target is being run in a BSP model where one or more of the BSPs are not used, the App builders are responsible for providing stub information with
tcfl.tc.target_c.stub_app_add()
. As well, if an app builder determines a BSP does not need to be stubbed, it can also remove it from the target’s list with:>>> del target.bsps_stub[BSPNAME]
Note this removal is done at the specific target level, as each target might have different models or needs.
Note you can use the dictionary
tcfl.tc.tc_c.buffers()
to store data to communicate amongst phases. This dictionary:- will be cleaned in between evaluation runs
- is not multi-threaded protected; take
tcfl.tc.tc_c.buffers_lock()
if you need to access it from different parallel execution methods (setup/start/eval/test/teardown methods are always executed serially). - take care not to start more than once; app builders are set up to start a target only if there is no field started-TARGETNAME set to True.
-
static
configure
(testcase, target, app_src)¶
-
static
build
(testcase, target, app_src)¶
-
static
deploy
(images, testcase, target, app_src)¶
-
static
setup
(testcase, target, app_src)¶
-
static
start
(testcase, target, app_src)¶
-
static
teardown
(testcase, target, app_src)¶
-
static
clean
(testcase, target, app_src)¶
-
tcfl.app.
make_j_guess
()¶ How much parallelism?
In theory there is a make job server that can help throttle this, but in practice the value also influences how much virtual memory the build of a bunch of TCs can consume, so…
Depending on how many jobs are already queued, decide how much -j we want to give to make.
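A heuristic in that spirit might weigh the CPU count against the current load; this is a sketch, not the actual implementation:

```python
import multiprocessing
import os

def make_j_guess():
    """Guess a -j value for make: start from the CPU count and back
    off when the machine is already loaded with queued jobs."""
    cpus = multiprocessing.cpu_count()
    try:
        load1, _, _ = os.getloadavg()
    except (AttributeError, OSError):   # not available on all platforms
        load1 = 0
    # leave headroom proportional to the existing load, never below 1
    return max(1, int(cpus - load1))

j = make_j_guess()
assert j >= 1
```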
-
tcfl.app_zephyr.
boot_delay
= {}¶ for each target type, an integer on how long we shall wait to boot Zephyr
-
class
tcfl.app_zephyr.
app_zephyr
¶ Support for configuring, building, deploying and evaluating a Zephyr-OS application.
To setup:
a toolchain capable of building Zephyr has to be installed in the system and the corresponding environment variables exported, such as:
- ZEPHYR_SDK_INSTALL_DIR for the Zephyr SDK
- ISSM_INSTALLATION_PATH for the Intel ISSM toolchain
- ESPRESSIF_TOOLCHAIN_PATH for the Espressif toolchain
- XTENSA_SDK for the Xtensa SDK
environment variables set:
- ZEPHYR_TOOLCHAIN_VARIANT (ZEPHYR_GCC_VARIANT before v1.11) pointing to the toolchain to use (zephyr, issm, espressif, xcc, etc…)
- ZEPHYR_BASE pointing to the path where the Zephyr tree is located
note these variables can be put in a TCF configuration file or they can also be specified as options to app_zephyr (see below).
Usage:
Declare in a target app_zephyr and point to the source tree and optionally, provide extra arguments to add to the Makefile invocation:
@tcfl.tc.target("zephyr_board", app_zephyr = 'path/to/app/source') class my_test(tc.tc_c): ...
If extra makefile arguments are needed, a tuple that starts with the path and contains multiple strings can be used:
@tcfl.tc.target("zephyr_board", app_zephyr = ( 'path/to/app/source', 'ZEPHYR_TOOLCHAIN_VARIANT=zephyr', 'ZEPHYR_BASE=some/path', 'OTHEREXTRAARGSTOZEPHYRMAKE')) class my_test(tc.tc_c): ...
to build multiple BSPs of the same target:
@tcfl.tc.target("type == 'arduino101'", app_zephyr = { 'x86': ( 'path/to/app/source/for/x86', 'ZEPHYR_TOOLCHAIN_VARIANT=zephyr', 'ZEPHYR_BASE=some/path', 'OTHEREXTRAARGSTOZEPHYRMAKE' ), 'arc': ( 'path/to/app/source/for/arc', 'ZEPHYR_TOOLCHAIN_VARIANT=zephyr', 'ZEPHYR_BASE=some/path', 'OTHEREXTRAARGSTOZEPHYRMAKE' ) }) class my_test(tc.tc_c): ...
furthermore, common options can be specified in app_zephyr_options (note this is just a string versus a tuple), so the previous example can be simplified as:
@tcfl.tc.target("type == 'arduino101'", app_zephyr = { 'x86': ( 'path/to/app/source/for/x86', 'OTHER-X86-EXTRAS' ), 'arc': ( 'path/to/app/source/for/arc', 'OTHER-ARC-EXTRAS' ) }, app_zephyr_options = \ 'ZEPHYR_TOOLCHAIN_VARIANT=zephyr' \ 'ZEPHYR_BASE=some/path' \ 'OTHER-COMMON-EXTRAS') class my_test(tc.tc_c): ...
The test creator can set the attributes (in the test class or in the target object):
zephyr_filter
zephyr_filter_origin
(optional)
to indicate a Zephyr Sanity Check style filter to apply before building, to be able to skip a test case if a logical expression on the Zephyr build configuration is not satisfied. Example:
@tcfl.tc.target("zephyr_board", app_zephyr = ...) class my_test(tc.tc_c): zephyr_filter = "CONFIG_VALUE_X == 2000 and CONFIG_SOMETHING != 'foo'" zephyr_filter_origin = __file__
-
static
configure
(testcase, target, app_src)¶
-
static
build
(testcase, target, app_src)¶ Build a Zephyr App for whichever BSP is active on a target
-
static
deploy
(images, testcase, target, app_src)¶
-
static
setup
(testcase, target, app_src)¶
-
static
clean
(testcase, target, app_src)¶
-
class
tcfl.app_zephyr.
zephyr
(target)¶ Extension to
tcfl.tc.target_c
to add Zephyr specific APIs; this extension is activated only if any BSP in the target is to be loaded with Zephyr.-
static
sdk_keys
(arch, variant)¶ Figure out the architecture, calling convention and SDK prefixes for this target’s current BSP.
-
config_file_read
(name=None, bsp=None)¶ Open a config file and return its values as a dictionary
Parameters: - name (str) – (optional) name of the configuration file, default to %(zephyr_objdir)s/.config.
- bsp (str) –
(optional) BSP on which to operate; when the target is configured for a BSP model which contains multiple Zephyr BSPs, you will need to specify which one to modify.
This parameter can be omitted if only one BSP is available in the current BSP Model.
Returns: dictionary keyed by CONFIG_ name with its value.
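Parsing kconfig-style data into such a dictionary can be sketched as follows (a simplification of what the real method does with the .config file):

```python
def kconfig_parse(text):
    """Parse kconfig-style lines (# is the comment character) into a
    dictionary keyed by CONFIG_ name."""
    values = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue            # skip comments and blank lines
        if '=' in line:
            key, _, value = line.partition('=')
            values[key] = value.strip('"')
    return values

sample = ('# comment\n'
          'CONFIG_VALUE_X=2000\n'
          'CONFIG_UART_CONSOLE_ON_DEV_NAME="UART_1"\n')
cfg = kconfig_parse(sample)
print(cfg["CONFIG_VALUE_X"])  # 2000
```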
-
config_file_write
(name, data, bsp=None)¶ Write an extra config file called NAME.conf in the Zephyr’s App build directory.
Note this takes care to only write it if the data is new or the file is nonexistent, to avoid unnecessary rebuilds.
Parameters: - name (str) – Name for the configuration file; this has to be a valid filename; .conf will be added by the function.
- data (str) –
Data to include in the configuration file; this is (currently) valid kconfig data, which are lines of text with # acting as comment character; for example:
CONFIG_UART_CONSOLE_ON_DEV_NAME="UART_1"
- bsp (str) –
(optional) BSP on which to operate; when the target is configured for a BSP model which contains multiple Zephyr BSPs, you will need to specify which one to modify.
This parameter can be omitted if only one BSP is available in the current BSP Model.
Example
>>> if something:
>>>     target.zephyr.config_file_write("mytweaks",
>>>         'CONFIG_SOMEVAR=1\n'
>>>         'CONFIG_ANOTHER="VALUE"\n')
-
check_filter
(_objdir, arch, board, _filter, origin=None)¶ This is going to be called by the App Builder’s build function to evaluate if we need to filter out a build of a testcase. In any other case, it will be ignored.
Parameters:
-
class
tcfl.app_sketch.
app_sketch
¶ Driver to build Arduino Sketch applications for flashing into MCU’s BSPs.
Note the setup instructions.
-
static
configure
(testcase, target, app_src)¶
-
static
build
(testcase, target, app_src)¶ Build a Sketch App for whichever BSP is active on a target
-
static
deploy
(images, testcase, target, app_src)¶
-
static
clean
(testcase, target, app_src)¶
-
class
tcfl.app_manual.
app_manual
¶ This is an App Builder that tells the system the testcase will provide instructions to configure/build/deploy/eval/clean in the testcase methods.
It is used when we are combining App Builders to build for some BSPs with manual methods. Note it can also be used to manually add stubbing information with:
>>> for bsp_stub in 'BSP1', 'BSP2', 'BSP3':
>>>     target.stub_app_add(bsp_stub, app_manual, "nothing")
8.1.6. TCF run report drivers¶
See report reference.
8.2. TCF client configuration¶
8.2.1. Configuration API for tcf¶
-
tcfl.config.
path
= []¶ The list of paths where we find configuration information
Path where shared files are stored
-
tcfl.config.
urls
= []¶ List of URLs to servers we are working with
each entry is a tuple of:
- URL (str): the location of the server
- SSL verification (bool): if we are obeying SSL certificate verification
- aka (str): short name for the server
- ca_path (str): path to certificates
-
tcfl.config.
url_add
(url, ssl_ignore=False, aka=None, ca_path=None)¶ Add a TTBD server
Parameters:
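A server is typically registered from a TCF client configuration file (a conf_*.py file in one of the configuration paths); a minimal sketch, where the server URL, aka and CA bundle path are hypothetical placeholders:

```python
# conf_servers.py -- hypothetical TCF client configuration fragment;
# the URL and certificate path below are placeholders, not real endpoints.
import tcfl.config

# Register a ttbd server; ssl_ignore = True skips certificate
# verification (handy for self-signed development certificates).
tcfl.config.url_add("https://ttbd.example.com:5000",
                    ssl_ignore = True,
                    aka = "devserver",
                    ca_path = "/etc/pki/ttbd-ca.pem")
```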
-
tcfl.config.
load
(config_path=None, config_files=None, state_path='~/.tcf', ignore_ssl=True)¶ Load the TCF Library configuration
This is needed before you can access from your client program any other module.
Parameters: - config_path – list of strings containing UNIX-style paths (DIR:DIR) to look for config files (conf_*.py) that will be loaded in alphabetical order. An empty path clears the current list.
- config_files – list of extra config files to load
- state_path (str) – (optional) path where to store state
- ignore_ssl (bool) – (optional) whether to ignore SSL verification or not (useful for self-signed certs)
8.3. TCF client internals¶
-
class
tcfl.
msgid_c
(s=None, s_encode=None, l=4, root=None, phase=None, depth=None, parent=None)¶ Accumulate data local to the current running thread.
This is used to generate a random ID (four chars) at the beginning of the testcase run in a thread by instantiating a local object of this class. As we call deeper into functions to do different parts, we instantiate more objects that will add random characters to said ID just for that call (when the object created goes out of scope, the ID returns to what it was).
Thus, as the call chain gets deeper, the message IDs go:
abcd abcdef abcdefgh abcdefghij
this allows for easy identification / lookup on a log file or classification.
Note we also keep a depth (useful for increasing the verbosity of log messages) and a phase, which we use to set the phase in which we are running, so log messages don’t have to specify it.
Note this is to be used as:
with msgid_c(ARGS):
    do stuff...
    msgid_c.ident()
    msgid_c.phase()
    msgid_c.depth()
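The nesting behavior can be sketched with a self-contained stand-in (the class and variable names here are made up for illustration; the real msgid_c also tracks depth, phase and parentage):

```python
import random
import string
import threading

_tls = threading.local()  # per-thread message ID accumulator

class msgid_sketch:
    """Simplified stand-in for tcfl.msgid_c: each nested instance
    appends random characters to the thread-local message ID and
    restores the previous ID when it goes out of scope."""
    def __init__(self, chars = 4):
        self.suffix = "".join(
            random.choice(string.ascii_lowercase) for _ in range(chars))

    def __enter__(self):
        self.saved = getattr(_tls, "ident", "")
        _tls.ident = self.saved + self.suffix
        return self

    def __exit__(self, *exc_info):
        _tls.ident = self.saved      # ID returns to what it was
        return False

    @staticmethod
    def ident():
        return getattr(_tls, "ident", "")

with msgid_sketch():
    outer_id = msgid_sketch.ident()          # e.g. 'abcd'
    with msgid_sketch(2):
        inner_id = msgid_sketch.ident()      # e.g. 'abcdef'
```

so a log line prefixed with the current ident() can always be traced back to the exact call nesting that emitted it.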
-
tls
= <thread._local object>¶
-
classmethod
cls_init
()¶
-
classmethod
encode
(s, l)¶
-
classmethod
generate
(l=4)¶
-
classmethod
depth
()¶
-
classmethod
phase
()¶
-
classmethod
ident
()¶
-
classmethod
current
()¶
-
classmethod
parent
()¶
-
-
tcfl.
origin_get
(depth=1)¶
-
tcfl.
origin_get_object
(o)¶
-
tcfl.
origin_get_object_path
(o)¶
8.3.1. Expecting things that have to happen¶
This module implements an expecter object: something that is told to expect things to happen, what to do when they happen (or not).
It is a combination of a poor man’s select() and Tk/Tcl Expect.
We cannot use select() or Tk/TCL Expect or Python’s PyExpect because:
- we need to listen to many things over HTTP connections and the library is quite simplistic in that sense, so there is maybe no point in hooking up a pure event system.
- we need to be able to listen to poll for data and evaluate it from one or more sources (like serial port, sensor, network data, whatever) in one or more targets all at the same time.
- it is simple, and works quite well
Any given testcase has an expecter object associated with it that can be used to wait for a list of events to happen in one or more targets. This allows, for example, during the execution of a testcase with multiple nodes, to always have pollers reading (eg) their serial consoles and evaluators making sure no kernel panics are happening in any of them, while at the same time checking for the output that should be coming from them.
The ‘expecter’ object can also be associated to a single target for a simpler interface when only access to one target is needed.
-
tcfl.expecter.
console_mk_code
(target, console)¶
-
tcfl.expecter.
console_mk_uid
(target, what, console, _timeout, result)¶
-
tcfl.expecter.
console_rx_eval
(expecter, target, regex, console=None, _timeout=None, result=None, uid=None)¶ Check what came on a console and act on it
Parameters: uid (str) – (optional) identifier to use to store offset data
-
tcfl.expecter.
console_rx_flush
(expecter, target, console=None, truncate=False)¶ Reset all the console read markers to 0
When we (for example) power cycle, we start capturing from zero, so we need to reset all the buffers of what we read.
-
tcfl.expecter.
console_rx_poller
(expecter, target, console=None)¶ Poll a console
-
class
tcfl.expecter.
expecter_c
(log, testcase, poll_period=0.25, timeout=30)¶ Object that is told to expect things to happen and what to do when they happen (or not).
When calling
run()
, a loop runs repeatedly, waiting poll_period
seconds in between polling periods until a given timeout
elapses. On each loop run, a set of functions is run. Functions are added with
add()
and removed with remove()
. Each function polls and stores data, evaluates said data, or both. It can then end the loop by raising an exception. It is also possible that nothing of interest happened, in which case it evaluates nothing and does not end the loop. See
add()
for more details. Some of those functions can be considered ‘expectations’ that have to pass for the full loop to be considered successful. A boolean to
add()
clarifies that. All those ‘expectations’ have to pass before the run can be considered successful. The loop will time out and fail if no evaluating function raises an exception to get out of it.
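The poll/evaluate loop can be sketched with a self-contained stand-in (heavily simplified: no targets, and success is signalled by a return value instead of the tcfl.tc result exceptions the real expecter_c uses):

```python
import time

class expecter_sketch:
    """Minimal stand-in for tcfl.expecter.expecter_c: run the
    registered functors every poll period until one marked
    has_to_pass returns non-None, or the timeout elapses."""
    def __init__(self, poll_period = 0.01, timeout = 1):
        self.poll_period = poll_period
        self.timeout = timeout
        self.functors = []      # list of (has_to_pass, fn, args)
        self.buffers = {}       # shared storage for pollers

    def add(self, has_to_pass, fn, args):
        self.functors.append((has_to_pass, fn, args))

    def run(self):
        ts0 = time.time()
        while time.time() - ts0 < self.timeout:
            for has_to_pass, fn, args in self.functors:
                if fn(self, *args) is not None and has_to_pass:
                    return True             # an expectation passed
            time.sleep(self.poll_period)
        raise TimeoutError("no expectation passed in time")

# Usage: a poller that accumulates fake console data into buffers
# and an evaluator that checks it; the real thing polls targets.
chunks = iter([ "", "hello ", "world" ])
def poller(expecter):
    expecter.buffers["console"] = \
        expecter.buffers.get("console", "") + next(chunks, "")
def evaluator(expecter, pattern):
    return True if pattern in expecter.buffers.get("console", "") else None

e = expecter_sketch()
e.add(False, poller, ())                 # poller: never "passes"
e.add(True, evaluator, ("hello world",)) # expectation that must pass
result = e.run()
```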
Rationale
This allows implementing simple usages, like waiting for something to come off any console with a default
>>> target.wait('STRING', TIMEOUT, console = None)
which also checks for other things we know can come from the OS in a console, like abort strings, kernel panics or dumps, for which we know we should abort immediately with a specific message.
FIXME:
it has to be easy to use and still providing things like
>>> target.wait('STRING', TIMEOUT, console = None) -> { True | False }
>>> target.expecter.add(console_rx, (STRING, console),)
>>> target.expect('STRING', TIMEOUT) -> raise error/fail/block
>>> target.on_rx('STRING', raise failure function)
-
add
(has_to_pass, functor, arguments, origin=None)¶ Add a function to the list of things to poll/evaluate
These functions shall either poll, evaluate or both:
- poll data and store it in the dictionary or anywhere else
where it can be accessed later. Use a unique key into the
dictionary
buffers
. - evaluate some previously polled data or whichever system
condition and raise an exception to indicate what happened
(from the set
tcfl.tc.pass_e
,tcfl.tc.blocked_e
,tcfl.tc.error_e
,tcfl.tc.failed_e
,tcfl.tc.skip_e
).
Eval functions can check their own timeouts and raise an exception to signal it (normally
tcfl.tc.error_e
It is also possible that nothing of interest to this evaluation function happened, and thus it will evaluate nothing.
Parameters: has_to_pass (bool) – In order to consider the whole expect sequence a pass, this functor has to declare its evaluation passes by returning anything but None or by raising tcfl.tc.pass_e
.Raises: to stop the run()
loop, raisetcfl.tc.pass_e
,tcfl.tc.blocked_e
,tcfl.tc.error_e
ortcfl.tc.skip_e
.Returns: ignored
-
console_get_file
(target, console=None)¶ Returns: file descriptor for the file that contains the currently read console. Note the pointer in this file descriptor shall not be modified as it might be being used by expectations. If you need to read from the file, dup it:
>>> f_existing = self.tls.expecter.console_get_file(target, console_id)
>>> f = open(f_existing.name)
-
log
(msg, attachments=None)¶
-
poll_period
¶
-
power_on_post
(target=None)¶ Reinitialize things that need flushing for a new power on
-
remove
(functor, arguments)¶
-
run
(timeout=None)¶ Run the expectation loop on the testcase until all expectations pass or the timeout is exceeded.
Parameters: timeout (int) – (optional) maximum time to wait for all expectations to be met (defaults to tcfl.expecter.expecter_c.timeout
)
-
timeout
¶
-
8.3.2. Client API for accessing ttbd’s REST API¶
This API provides a way to access the REST API exposed by the ttbd daemon; it is divided in two main blocks:
rest_target_broker
: abstracts a remote ttbd server and provides methods to run stuff on targets and connect/disconnect things on/from targets.rest_*() methods that take a namespace of arguments, look up the target object, map it to a remote server, execute the method and then print the result to the console.
This breakup is a bit arbitrary and could use some cleanup.
-
tcfl.ttb_client.
import_mp_pathos
()¶
-
tcfl.ttb_client.
import_mp_std
()¶
-
tcfl.ttb_client.
tls_var
(name, factory, *args, **kwargs)¶
-
class
tcfl.ttb_client.
rest_target_broker
(state_path, url, ignore_ssl=False, aka=None, ca_path=None)¶ Create a proxy for a target broker, optionally loading state (like cookies) previously saved.
Parameters: - state_path (str) – Path prefix where to load state from
- url (str) – URL for which we are loading state
- ignore_ssl (bool) – Ignore server’s SSL certificate validation (use for self-signed certs).
- aka (str) – Short name for this server; defaults to the hostname (sans domain) of the URL.
- ca_path (str) – Path to SSL certificate or chain-of-trust bundle
Returns: True if information was loaded for the URL, False otherwise
-
projection
= None¶
-
API_VERSION
= 1¶
-
API_PREFIX
= '/ttb-v1/'¶
-
classmethod
rts_cache_flush
()¶
-
tb_state_trash
()¶
-
tb_state_save
(filepath)¶ Save cookies in path so they can be loaded when the object is created.
Parameters: path (str) – Filename where to save to
-
send_request
(method, url, data=None, json=None, files=None, stream=False, raw=False, timeout=480)¶ Send a request to the server using url and data, save the cookies generated from the request, search for issues on the connection and raise an exception or return the response object.
Parameters: Returns: response object
Return type:
-
login
(email, password)¶
-
logout
()¶
-
validate_session
(validate=False)¶
-
rest_tb_target_list
(all_targets=False, target_id=None, projection=None)¶ List targets in this server
Parameters:
-
rest_tb_target_update
(target_id)¶ Update information about a target
Parameters: target_id (str) – ID of the target to operate on Returns: updated target tags
-
rest_tb_target_acquire
(rt, ticket='', force=False)¶
-
rest_tb_target_active
(rt, ticket='')¶
-
rest_tb_target_release
(rt, ticket='', force=False)¶
-
tcfl.ttb_client.
rest_init
(path, url, ignore_ssl=False, aka=None)¶ Initialize access to a remote target broker.
Parameters: Returns: True if information was loaded for the URL, False otherwise
-
tcfl.ttb_client.
rest_shutdown
(path)¶ Shutdown REST API, saving state in path.
Parameters: path (str) – Path to where to save state information
-
tcfl.ttb_client.
rest_login
(args)¶ Login into remote servers.
Parameters: args (argparse.Namespace) – login arguments like -q (quiet) or userid. Returns: True if it can be logged into at least 1 remote server.
-
tcfl.ttb_client.
rest_logout
(args)¶
-
tcfl.ttb_client.
rest_target_print
(rt, verbosity=0)¶ Print information about a REST target taking into account the verbosity level from the logging module
Parameters: rt (dict) – object describing the REST target to print
-
tcfl.ttb_client.
rest_target_list_table
(targetl)¶ List all the targets in a table format, appending * if powered up, ! if owned.
-
tcfl.ttb_client.
cmdline_list
(spec_strings, do_all=False)¶ Return a list of dictionaries representing targets that match the specification strings
Parameters:
-
tcfl.ttb_client.
rest_target_list
(args)¶
-
tcfl.ttb_client.
rest_target_find_all
(all_targets=False)¶ Return descriptors for all the known remote targets
Parameters: all_targets (bool) – Include or not disabled targets Returns: list of remote target descriptors (each being a dictionary).
-
tcfl.ttb_client.
rest_target_acquire
(args)¶ Parameters: args (argparse.Namespace) – object containing the processed command line arguments; need args.target Returns: dictionary of tags Raises: IndexError if target not found
-
tcfl.ttb_client.
rest_target_release
(args)¶ Parameters: args (argparse.Namespace) – object containing the processed command line arguments; need args.target Raises: IndexError if target not found
-
tcfl.util.
argp_setup
(arg_subparsers)¶
-
tcfl.util.
healthcheck
(args)¶
-
tcfl.util.
healthcheck_power
(rtb, rt)¶
8.3.3. Zephyr’s SanityCheck testcase.ini driver for testcase integration¶
This implements a driver to run Zephyr’s Sanity Check testcases
(described with a testcase.ini file) without having to implement any
new descriptions. Details are explained in
tc_zephyr_sanity_c
.
-
exception
tcfl.tc_zephyr_sanity.
ConfigurationError
¶
-
class
tcfl.tc_zephyr_sanity.
SanityConfigParser
(filename)¶ Class to read architecture and test case .ini files with semantic checking
This is only used for the old, .ini support
Instantiate a new SanityConfigParser object
Parameters: filename (str) – Source .ini file to read -
sections
()¶ Get the set of sections within the .ini file
Returns: a list of string section names
-
get_section
(section, valid_keys)¶ Get a dictionary representing the keys/values within a section
Parameters: - section (str) – The section in the .ini file to retrieve data from
- valid_keys (dict) –
A dictionary representing the intended semantics for this section. Each key in this dictionary is a key that could be specified, if a key is given in the .ini file which isn’t in here, it will generate an error. Each value in this dictionary is another dictionary containing metadata:
- ”default” - Default value if not given
- ”type” - Data type to convert the text value to. Simple types supported are “str”, “float”, “int”, “bool”, which will get converted to the respective Python data types. “set” and “list” may also be specified, which will split the value by whitespace (but keep the elements as strings). Finally, “list:<type>” and “set:<type>” may be given, which will perform a type conversion after splitting the value up.
- ”required” - If true, raise an error if not defined. If false and “default” isn’t specified, a type conversion will be done on an empty string.
Returns: A dictionary containing the section key-value pairs with type conversion and default values filled in per valid_keys
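The valid_keys semantics can be sketched with a self-contained converter (illustrative only; the schema and section values below are made up, and the real parser also handles the "required" flag and error reporting):

```python
def convert_value(text, type_name):
    """Convert a raw .ini string per a valid_keys "type" entry."""
    if type_name == "bool":
        return text.lower() in ("true", "1", "yes")
    if type_name in ("str", "int", "float"):
        return { "str": str, "int": int, "float": float }[type_name](text)
    if type_name in ("list", "set"):
        items = text.split()
        return items if type_name == "list" else set(items)
    if type_name.startswith(("list:", "set:")):
        container, _, elem_type = type_name.partition(":")
        items = [ convert_value(i, elem_type) for i in text.split() ]
        return items if container == "list" else set(items)
    raise ValueError("unknown type %s" % type_name)

valid_keys = {        # hypothetical schema for one .ini section
    "timeout": { "type": "int", "default": "60" },
    "tags": { "type": "list" },
    "build_only": { "type": "bool", "default": "false" },
}
section = { "timeout": "120", "tags": "kernel net" }   # raw .ini values
parsed = {
    key: convert_value(section.get(key, meta.get("default", "")),
                       meta["type"])
    for key, meta in valid_keys.items()
}
```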
-
-
class
tcfl.tc_zephyr_sanity.
harness_c
¶ A test harness for a Zephyr test
In the Zephyr SanityCheck environment, a harness is a set of steps to verify that a testcase did the right thing.
The default harness just verifies if PROJECT EXECUTION FAILED or PROJECT EXECUTION SUCCESSFUL was printed (which is done in
tc_zephyr_sanity_c.eval_50()
).However, if a harness is specified in the testcase/sample YAML with:
harness: HARNESSNAME
harness_config:
  field1: value1
  field2: value2
  ...
then tc_zephyr_sanity_c._dict_init() will create a harness object of class _harness_HARNESSNAME_c and set it to
tc_zephyr_sanity_c.harness
. Then, during the evaluation phase, we’ll run it intc_zephyr_sanity_c.eval_50()
.The harness object has
evaluate()
which is called to implement the harness on the testcase and target it is running on.For each type of harness, there is a class for it implementing the details of it.
-
evaluate
(_testcase)¶
-
-
class
tcfl.tc_zephyr_sanity.
tc_zephyr_subsanity_c
(name, tc_file_path, origin, zephyr_name, parent, attachments=None)¶ Subtestcase of a Zephyr Sanity Check
A Zephyr Sanity Check testcase might be composed of one or more subtestcases.
We run them all in a single shot using
tc_zephyr_sanity_c
and when done, we parse the output (tc_zephyr_sanity_c._subtestcases_grok) and for each subtestcase, we create one of these subtestcase objects and queue it to be executed in the same target where the main testcase was run.This is only a construct to ensure they are reported as separate testcases. We already know if they passed, errored or failed, so all we do is report as such.
-
configure_50
()¶
-
eval_50
()¶
-
static
clean
()¶
-
class_result
= 0 (0 0 0 0 0)¶
-
-
class
tcfl.tc_zephyr_sanity.
tc_zephyr_sanity_c
(name, tc_file_path, origin, zephyr_name, subcases)¶ Test case driver specific to Zephyr project testcases
This will generate test actions based on Zephyr project testcase.ini files.
See Zephyr sanitycheck --help for details on the format of these testcase configuration files. A single testcase.ini may specify one or more test cases.
This rides on top of
tcfl.tc.tc_c
driver; tags are translated, whitelists/excludes are translated to target selection language and a single target is declared (for cases that are not unit tests).is_testcase()
looks fortestcase.ini
files, parses up usingSanityConfigParser
to load it up into memory and calls_dict_init()
to set values and generate the target (when needed) and setup the App Zephyr builder.This is how we map the different testcase.ini sections/concepts to
tcfl.tc.tc_c
data:extra_args = VALUES
: handled asapp_zephyr_options
, passed to the Zephyr App Builder.extra_configs = LIST
: list of extra configuration settingstestcase source is assumed to be in the same directory as the
testcase.ini
file. Passed to the Zephyr App Builder withapp_zephyr
.timeout = VALUE
: use to set the timeout in the testcase expect loop.tags = TAGS
: added to the tags list, with an originskip
: skipped right away with antcfl.tc.skip_e
exceptionslow
: converted to tag
: added asself.build_only
(arch,platform)_(whitelist,exclude)
: what testcase.ini calls arch is a bsp in TCF parlance and platform maps to the zephyr_board parameter the Zephyr test targets export on their BSP specific tags. Thus, our spec becomes something like:( bsp == "ARCH1" or bsp == "ARCH2" ) and not ( bsp == "ARCH3" or bsp == "ARCH4" )
arch_whitelist = ARCH1 ARCH2
mapped to @targets += bsp:^(ARCH1|ARCH2)$
arch_exclude = ARCH1 ARCH2
mapped to @targets += bsp:(?!^(ARCH1|ARCH2)$)
platform_whitelist = PLAT1 PLAT2
mapped to @targets += board:^(PLAT1|PLAT2)$
platform_exclude = PLAT1 PLAT2
mapped to @targets += board:(?!^(PLAT1|PLAT2)$)
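The whitelist/exclude regexes in this mapping can be exercised with plain Python (the architecture names are illustrative; the @targets selection syntax itself is TCF's, only the regular expressions are checked here):

```python
import re

def whitelist_regex(names):
    """Build the inclusion regex used for *_whitelist entries."""
    return r"^(%s)$" % "|".join(names)

def exclude_regex(names):
    """Build the negative-lookahead regex used for *_exclude entries."""
    return r"(?!^(%s)$)" % "|".join(names)

wl = whitelist_regex(["ARCH1", "ARCH2"])   # ^(ARCH1|ARCH2)$
ex = exclude_regex(["ARCH1", "ARCH2"])     # (?!^(ARCH1|ARCH2)$)

# The whitelist matches only the listed names; the exclusion regex
# matches (at position 0) exactly the names the whitelist rejects.
print(bool(re.match(wl, "ARCH1")), bool(re.match(wl, "ARCH3")))
print(bool(re.match(ex, "ARCH1")), bool(re.match(ex, "ARCH3")))
```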
config_whitelist
andfilter
: filled into the args stored in the testcase, which then get passed as part of the kws[config_whitelist] dictionary. The build process then calls the action_eval_skip() method to test if the TC has to be skipped after creating the base config.
-
harness
= None¶ Harness to run
-
subtestcases
= None¶ Subtestcases that are identified as part of this (possibly container) testcase.
-
unit_test_output
= None¶ Filename of the output of the unit test case; when we run a unit testcase, the output does not come from the console system, as it runs local, but from a local file.
-
configure_00
()¶
-
patch_tags
= {}¶ Dictionary of tags that we want to add to given test cases; the key is the name of the testcase – if the testcase name ends with the same value as in here, then the given list of boolean tags will be patched as True; eg:
{ "dir1/subdir2/testcase.ini#testname" : [ 'ignore_faults', 'slow' ] }
usually this will be setup in a
{/etc/tc,~/.tcf.tcf}/conf_zephy.py
configuration file as:
tcfl.tc_zephyr_sanity.tc_zephyr_sanity_c.patch_tags = {
    "tests/legacy/kernel/test_static_idt/testcase.ini#test": [ 'ignore_faults' ],
    ...
}
-
patch_hw_requires
= {}¶ Dictionary of hw_requires values that we want to add to given test cases; the key is the name of the testcase – if the testcase name ends with the same value as in here, then the given list of hw_requires will be appended as requirements to the target; eg:
{ "dir1/subdir2/testcase.ini#testname" : [ 'fixture_1' ], "dir1/subdir2/testcase2.ini#testname" : [ 'fixture_2' ] }
usually this will be setup in a
{/etc/tc,~/.tcf.tcf}/conf_zephy.py
configuration file as:
tcfl.tc_zephyr_sanity.tc_zephyr_sanity_c.patch_hw_requires = {
    "dir1/subdir2/testcase.ini#testname" : [ 'fixture_1' ],
    "dir1/subdir2/testcase2.ini#testname" : [ 'fixture_2' ],
    ...
}
-
classmethod
schema_get_file
(path)¶
-
classmethod
schema_get
(filename)¶
-
build_00_tc_zephyr
()¶
-
build_unit_test
()¶ Build a Zephyr Unit Test in the local machine
-
eval_50
()¶
-
classmethod
data_harvest
(domain, name, regex, main_trigger_regex=None, trigger_regex=None, origin=None)¶ Configure a data harvester
After a Zephyr sanity check is executed successfully, the output of each target is examined by the data harvesting engine to extract data to store in the database with
target.report_data
The harvester is a very simple state machine controlled by up to three regular expressions whose objective is to extract a value that will be reported to the database as a domain/name/value triad.
A domain groups together multiple name/value pairs that are related (for example, latency measurements).
Each line of output will be matched by each of the entries registered with this function.
All arguments (except for origin) will expand ‘%(FIELD)s’ with values taken from the target’s keywords (
tcfl.tc.target_c.kws
).Parameters: - domain (str) – to which domain this measurement applies (eg: “Latency Benchmark %(type)s”); It is recommended this is used to aggregate values to different types of targets.
- name (str) – name of the value (eg: “context switch (microseconds)”)
- regex (str) – regular expression to match against each line of the target’s output. A Python regex ‘(?P<value>SOMETHING)` has to be used to point to the value that has to be extracted (eg: “context switch time (?P<value>[0-9]+) usec”).
- main_trigger_regex (str) – (optional) only look for regex if this regex has already been found. This trigger is then considered active for the rest of the output. This is used to enable the search only once a banner in the output indicates that the measurements are about to follow (eg: “Latency Benchmark starts here”).
- trigger_regex (str) –
(optional) only look for regex if this regex has already been found. However, once regex is found, then this trigger is deactivated. This is useful when the measurements are reported in two lines:
measuring context switch like this
measurement is X usecs
and thus the regex could catch multiple lines because another measurement is:
measuring context switch like that
measurement is X usecs
the regex measurement is (?P<value>[0-9]) usecs would catch both, but by giving it a trigger_regex of measuring context switch like this, then it will catch only the first, as once it is found, the trigger is removed.
- origin (str) – (optional) where these values are coming from; if not specified, it will be the call site for the function.
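The trigger behavior described above can be sketched as a self-contained state machine (illustrative only: the output lines and regexes are made up, and the real engine reports each value through target.report_data):

```python
import re

def harvest(lines, regex, trigger_regex = None):
    """Scan output lines for regex; only arm the search once
    trigger_regex matches, and disarm again after a value is taken."""
    values = []
    armed = trigger_regex is None
    for line in lines:
        if not armed and re.search(trigger_regex, line):
            armed = True        # trigger seen, start looking for regex
            continue
        if armed:
            m = re.search(regex, line)
            if m:
                values.append(m.group("value"))
                if trigger_regex:
                    armed = False   # one value per trigger occurrence
    return values

output = [
    "measuring context switch like this",
    "measurement is 12 usecs",
    "measuring context switch like that",
    "measurement is 34 usecs",
]
# Without a trigger, the regex catches both measurements; with a
# trigger, only the line following 'like this' is taken.
vals_all = harvest(output, r"measurement is (?P<value>[0-9]+) usecs")
vals_trig = harvest(output, r"measurement is (?P<value>[0-9]+) usecs",
                    r"measuring context switch like this")
```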
-
subtc_results_valid
= ('PASS', 'FAIL', 'SKIP')¶
-
subtc_regex
= <_sre.SRE_Pattern object>¶
-
teardown_subtestcases
()¶ Given the output of the testcases, parse subtestcases for each target
-
teardown
()¶
-
clean
()¶
-
filename_regex
= <_sre.SRE_Pattern object>¶
-
filename_yaml_regex
= <_sre.SRE_Pattern object>¶
-
classmethod
is_testcase
(path, _from_path, tc_name, subcases_cmdline)¶ Determine if a given file describes one or more testcases and create them
TCF’s test case discovery engine calls this method for each file that could describe one or more testcases. It will iterate over all the files and paths passed on the command line, find files and call this function to enquire about each.
This function’s responsibility is then to look at the contents of the file and create one or more objects of type
tcfl.tc.tc_c
which represent the testcases to be executed, returning them in a list.When creating a testcase driver, the driver has to create its own version of this function. The default implementation recognizes Python files called test_*.py that contain one or more classes that subclass
tcfl.tc.tc_c
.See examples of drivers in:
tcfl.tc_clear_bbt.tc_clear_bbt_c.is_testcase()
tcfl.tc_zephyr_sanity.tc_zephyr_sanity_c.is_testcase()
examples.test_ptest_runner()
(impromptu testcase driver)
note drivers need to be registered with
tcfl.tc.tc_c.driver_add()
; on the other hand, a Python impromptu testcase driver needs no registration, but the test class has to be called _driver.Parameters: - path (str) – path and filename of the file that has to be examined; this is always a regular file (or symlink to it).
- from_path (str) –
source command line argument this file was found on; e.g.: if path is dir1/subdir/file, and the user ran:
$ tcf run somefile dir1/ dir2/
tcf run found this under the second argument and thus:
>>> from_path = "dir1"
- tc_name (str) – testcase name the core has determine based on the path and subcases specified on the command line; the driver can override it, but it is recommended it is kept.
- subcases_cmdline (list(str)) –
list of subcases the user has specified in the command line; e.g.: for:
$ tcf run test_something.py#sub1#sub2
this would be:
>>> subcases_cmdline = [ 'sub1', 'sub2']
Returns: list of testcases found in path, empty if none found or file not recognized / supported.
-
class_result
= 0 (0 0 0 0 0)¶
Driver to run Clear Linux BBT test suite
The main TCF testcase scanner walks files looking for automation
scripts / testcase scripts and will call
tc_clear_bbt_c.is_testcase()
for each *.t
file in a
directory. The driver will generate one testcase per directory which
will execute all the .t
in there and then execute all the .t
in the any-bundle subdirectory.
The testcases created are instances of tc_clear_bbt_c
; this
class will allocate one interconnect/network and one
*pos_capable* target. In said target it will
install Clear OS (from an image server in the interconnect) during the
deploy phase.
Once the installation is done, it will install any required bundles
and execute all the .t
files in the directory followed by all the
.t
in the any-bundle top level directory.
The output of each .t
execution is parsed with
tap_parse_output()
to generate for each a subcase (an instance
of subcases
) which will report the
individual result of that subcase execution.
Setup steps
To improve the deployment of the BBT tree, a copy can be kept in the server’s rsync image area for initial seeding; to setup, execute in the server:
$ mkdir -p /home/ttbd/images/misc
$ git clone URL/bbt.git /home/ttbd/images/misc/bbt.git
-
tcfl.tc_clear_bbt.
tap_parse_output
(output)¶ Parse TAP into a dictionary
Parameters: output (str) – TAP formatted output Returns: dictionary keyed by test subject containing a dictionary of key/values:
- lines: list of line numbers in the output where data was found
- plan_count: test case number according to the TAP plan
- result: result of the testcase (ok or not ok)
- directive: if any directive was found, the text for it
- output: output specific to this testcase
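The TAP result lines handled here can be sketched with a minimal self-contained parser (heavily simplified stand-in, not the real tap_parse_output: it ignores plan lines, directives and per-test output):

```python
import re

# Matches TAP result lines such as 'ok 1 - subject' / 'not ok 2 subject'
_tap_regex = re.compile(
    r"^(?P<result>ok|not ok)\s+(?P<plan_count>[0-9]+)\s*-?\s*(?P<subject>.*)$")

def tap_parse_sketch(output):
    """Parse TAP-ish output into a dict keyed by test subject."""
    tcs = {}
    for line in output.splitlines():
        m = _tap_regex.match(line.strip())
        if m:
            tcs[m.group("subject")] = {
                "plan_count": int(m.group("plan_count")),
                "result": m.group("result"),
            }
    return tcs

output = """\
1..2
ok 1 - boots to prompt
not ok 2 - service starts
"""
r = tap_parse_sketch(output)
```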
-
tcfl.tc_clear_bbt.
ignore_ts
= []¶ Ignore t files
List of individual .t files to ignore, since we can’t filter those on the command line; this can be done in a config file:
>>> tcfl.tc_clear_bbt.ignore_ts = [
>>>     'bundles/XYZ/somefile.t',
>>>     'bundles/ABC/someother.t',
>>>     '.*/any#somefile.sometestcase',
>>> ]
or from the command line, by setting the BBT_IGNORE_TS environment variable:
$ export BBT_IGNORE_TS="bundles/XYZ/somefile.t #bundles/ABC/someother.t .*/any#somefile.sometestcase"
$ tcf run bbt.git/bundles/XYZ bbt.git/bundles/ABC
Note all entries will be compiled as Python regular expressions that have to match from the beginning. A whole .t file can be excluded with:
>>> 'bundles/XYZ/somefile.t'
whereas a particular testcase in said file:
>>> 'bundles/XYZ/somefile.subcasename'
note those subcases will still be executed (there is no way to tell the bats tool to ignore them) but their results will be ignored.
-
tcfl.tc_clear_bbt.
bundle_run_timeouts
= {'bat-R-extras-R-library_parallel.t': 480, 'bat-desktop-kde-apps-gui.t': 800, 'bat-desktop-kde-gui.t': 800, 'bat-mixer.t': 3000, 'bat-os-testsuite-phoronix.t': 600, 'bat-os-utils-gui-dev-pkgconfig-compile.t': 400, 'bat-perl-basic-perl-use_parallel.t': 1800, 'bat-perl-extras-perl-use_parallel.t': 20000, 'bat-xfce4-desktop-bin-help.t': 800, 'bat-xfce4-desktop-gui.t': 800, 'kvm-host': 480, 'os-clr-on-clr': 640, 'perl-basic': 480, 'perl-extras': 12000, 'quick-perms.t': 3000, 'telemetrics': 480, 'xfce4-desktop': 800}¶ How long to wait for the BBT run to take?
Each test case might take longer or shorter to run, but there is no good way to tell. Thus we hardcode some by bundle name or by .t name.
More settings can be added from configuration by adding to any TCF configuration file entries such as:
>>> tcfl.tc_clear_bbt.bundle_run_timeouts['NAME'] = 456
>>> tcfl.tc_clear_bbt.bundle_run_timeouts['NAME2'] = 3000
>>> tcfl.tc_clear_bbt.bundle_run_timeouts['NAME3'] = 12
>>> ...
-
tcfl.tc_clear_bbt.
bundle_run_pre_sh
= {'bat-perl-basic-perl-use.t': ['export PERL_CANARY_STABILITY_NOPROMPT=1']}¶ Commands to execute before running bats on each .t file (key by .t file name or bundle-under-test name).
Note these will be executed in the bundle directory and templated with
STR % testcase.kws
.
-
tcfl.tc_clear_bbt.
bundle_path_map
= [(<_sre.SRE_Pattern object>, '\\g<1>')]¶ Map bundle path names
Ugly case here; this is a bad hack to work around another one.
In some testcases, the .t file is an actual shell script that does some setup and then executes a real .t file, which has been detected with a different name while we scanned for subcases.
So this allows us to map what we detect (the regex) to what bats is then reported when running that hack (the replacement).
Confusing.
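Extra mappings can be appended from a TCF configuration file; a hypothetical sketch (the pattern and replacement below are made up, not taken from a real BBT tree):

```python
# conf_bbt.py -- hypothetical configuration fragment; the pattern and
# replacement are illustrative only.
import re
import tcfl.tc_clear_bbt

# Map a wrapper directory 't/' out of detected subcase paths, so the
# name bats reports matches what the scanner found on disk.
tcfl.tc_clear_bbt.bundle_path_map.append(
    (re.compile("^t/(.*)$"), "\\g<1>"))
```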
-
tcfl.tc_clear_bbt.
bundle_t_map
= {'bat-dev-tooling.t.autospec_nano': 'build-package.autospec_nano'}¶ Sometimes this works in conjunction with
bundle_path_map
above, when a .t file is actually calling another one (maybe in another directory, then you need an entry inbundle_path_map
) to rename the directory to match the entry of this one.Example
In the hardcoded example, bat-dev-tooling.t is just doing something to prep and then exec’ing bats to run t/build-package.t.
So we need to map the directory t out and also rename the entry from build-package.t/something that would be found from scanning the output to what is expected from scanning the testcases on disk.
-
class
tcfl.tc_clear_bbt.
tc_clear_bbt_c
(path, t_file_path)¶ Driver to load Clear Linux BBT test cases
A BBT test case is specified in bats <https://github.com/sstephenson/bats>_ format in a
FILENAME.t
This driver gets called by the core testcase scanning system through the entry point
is_testcase()
–in quite a simplistic way, if it detects the file isFILENAME.t
, it decides it is valid and creates a class instance off the file path.The class instance serves as a testcase script that will:
in the deployment phase (deploy method):
Request a Clear Linux image to be installed in the target system using the provisioning OS.
Deploy the BBT tree to the target’s
/opt/bbt.git
so testcases have all the dependencies they need to run (at this point we assume the git tree is available).Assumes the BBT tree has an specific layout:
DIR/SUBDIR/SUBSUBDIR[/...]/NAME/*.t
any-bundles/*.t
on the start phase:
- power cycle the target machine to boot and login into Clear
- install the software-testing bundle and any others specified in an optional ‘requirements’ file. Maybe use a mirror for swupd.
on the evaluation phase:
- run bats on the
FILENAME.t
which we have copied to /opt/bbt.git
; parse the output
into subcases to report their results individually using tcfl.tc.subtc_c
-
capture_boot_video_source
= 'screen_stream'¶ Shall we capture a boot video if possible?
-
configure_00_set_relpath_set
(target)¶
-
image
= 'clear'¶ Specification of image to install
default to whatever is configured on the environment (if any) for quick setup; otherwise it can be configured in a TCF configuration file by adding:
>>> tcfl.tc_clear_bbt.tc_clear_bbt_c.image = "clear::24800"
-
swupd_url
= None¶ swupd mirror to use
>>> tcfl.tc_clear_bbt.tc_clear_bbt_c.swupd_url = \
>>>     "http://someupdateserver.com/update/"
Note this can use keywords exported by the interconnect, eg:
>>> tcfl.tc_clear_bbt.tc_clear_bbt_c.swupd_url = \
>>>     "http://%(MYFIELD)s/update/"
where:
$ tcf list -vv nwa | grep MYFIELD
  MYFIELD: someupdateserver.com
-
image_tree
= None¶
-
swupd_debug
= False¶ Do we add debug output to swupd?
-
mapping
= {'not ok': result_c(0, 0, 1, 0, 0), 'ok': result_c(1, 0, 0, 0, 0), 'skip': result_c(0, 0, 0, 0, 1), 'todo': result_c(0, 1, 0, 0, 0)}¶ Mapping from TAP output to TCF result conditions (passed, errors, failed, blocked, skipped)
This can be adjusted globally for all testcases or per testcase:
>>> tcfl.tc_clear_bbt.tc_clear_bbt_c.mapping['skip'] \
>>>     = tcfl.tc.result_c(1, 0, 0, 0, 0) # pass
or for a specific testcase:
>>> tcobject.mapping['skip'] = 'BLCK'
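The mapping values are tcfl.tc.result_c counters of (passed, errors, failed, blocked, skipped); a minimal sketch of how such a mapping could tally TAP verdicts into one aggregate result, using a hypothetical namedtuple stand-in for the real class:

```python
import collections

# Hypothetical stand-in for tcfl.tc.result_c: one counter per verdict
result_c = collections.namedtuple(
    "result_c", ["passed", "errors", "failed", "blocked", "skipped"])

# Same defaults as the driver's `mapping` attribute above
mapping = {
    "ok": result_c(1, 0, 0, 0, 0),
    "not ok": result_c(0, 0, 1, 0, 0),
    "skip": result_c(0, 0, 0, 0, 1),
    "todo": result_c(0, 1, 0, 0, 0),
}

def tally(tap_results):
    """Sum the per-subcase results into one aggregate result."""
    totals = [0] * 5
    for r in tap_results:
        for i, v in enumerate(mapping[r]):
            totals[i] += v
    return result_c(*totals)
```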
-
boot_mgr_disable
= False¶ Disable efibootmgr and clr-boot-manager
-
fix_time
= None¶ If the environment variable SWUPD_FIX_TIME is defined, set the target’s time to the client’s time
-
deploy
(ic, target)¶
-
start
(ic, target)¶
-
eval
(ic, target)¶
-
teardown_50
()¶
-
static
clean
()¶
-
ignore_stress
= True¶ (bool) ignores stress testcases
-
paths
= {}¶
-
filename_regex
= <_sre.SRE_Pattern object>¶
-
classmethod
is_testcase
(path, _from_path)¶ Determine if a given file describes one or more testcases and create them
TCF’s test case discovery engine calls this method for each file that could describe one or more testcases. It iterates over all the files and directories passed on the command line, finds files, and calls this function to enquire about each.
This function’s responsibility is then to look at the contents of the file and create one or more objects of type
tcfl.tc.tc_c
which represent the testcases to be executed, returning them in a list.

When creating a testcase driver, the driver has to provide its own version of this function. The default implementation recognizes Python files called test_*.py that contain one or more classes that subclass
tcfl.tc.tc_c
.See examples of drivers in:
tcfl.tc_clear_bbt.tc_clear_bbt_c.is_testcase()
tcfl.tc_zephyr_sanity.tc_zephyr_sanity_c.is_testcase()
examples.test_ptest_runner()
(impromptu testcase driver)
note drivers need to be registered with
tcfl.tc.tc_c.driver_add()
; on the other hand, a Python impromptu testcase driver needs no registration, but the test class has to be called _driver.

Parameters: - path (str) – path and filename of the file that has to be examined; this is always a regular file (or a symlink to one).
- from_path (str) –
source command line argument this file was found on; e.g.: if path is dir1/subdir/file, and the user ran:
$ tcf run somefile dir1/ dir2/
tcf run found this under the second argument and thus:
>>> from_path = "dir1"
- tc_name (str) – testcase name the core has determined based on the path and subcases specified on the command line; the driver can override it, but it is recommended to keep it.
- subcases_cmdline (list(str)) –
list of subcases the user has specified in the command line; e.g.: for:
$ tcf run test_something.py#sub1#sub2
this would be:
>>> subcases_cmdline = [ 'sub1', 'sub2']
Returns: list of testcases found in path, empty if none found or file not recognized / supported.
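A minimal driver following this contract might look like the sketch below; tc_base here is a hypothetical stand-in for tcfl.tc.tc_c so the example is self-contained (a real driver would subclass tcfl.tc.tc_c and register itself with tcfl.tc.tc_c.driver_add()):

```python
import os
import re

class tc_base:
    # Hypothetical stand-in for tcfl.tc.tc_c, just enough to
    # illustrate the is_testcase() contract
    def __init__(self, name, path):
        self.name = name
        self.path = path

class my_driver(tc_base):
    # Like the BBT driver, this one recognizes files named *.t
    filename_regex = re.compile(r"^.*\.t$")

    @classmethod
    def is_testcase(cls, path, _from_path):
        # Not a file we recognize -> empty list, per the API contract
        if not cls.filename_regex.match(os.path.basename(path)):
            return []
        # One testcase object per recognized file; a driver may
        # return several (e.g. one per subcase found in the file)
        return [cls(os.path.basename(path), path)]
```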
-
class_result
= result_c(0, 0, 0, 0, 0)¶
8.4. Target metadata¶
Each target has associated a list of metadata, some of them common to
all targets, some of them driver or target type specific that you can
get on the command line with tcf list -vvv TARGETNAME
or in a test
script in the dictionary tcfl.tc.target_c.rt
(for Remote
Target), or more generally in the keyword dictionary
tcfl.tc.target_c.kws
.
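For illustration, a sketch of what such a dictionary might contain and how a test script can use it (the keys shown are from the common metadata described below; the values are hypothetical):

```python
# Hypothetical sample of what tcfl.tc.target_c.kws might contain;
# the real dictionary is populated by the runner from the target's tags
kws = {
    "id": "r14s40",
    "fullid": "SERVERAKA/r14s40",
    "type": "Intel NUC5i5425OU",
    "linux_serial_console_default": "ttyUSB0",
}

# Metadata values can be interpolated into command lines with
# Python %(NAME)s codes, the same mechanism TCF uses elsewhere
console_arg = "console=%(linux_serial_console_default)s,115200" % kws

# fullid is SERVERAKA/ID, so the server alias splits off cleanly
server_aka, target_id = kws["fullid"].split("/", 1)
```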
Metadata is specified:
in the server’s read only configuration by setting tags to the target during creation of the
ttbl.test_target
object, by passing a dictionary tottbl.config.target_add()
>>> ttbl.config.target_add(
>>>     ttbl.tt.tt_serial(....),
>>>     tags = {
>>>         'linux': True,
>>>         ...
>>>         'pos_capable': True,
>>>         'pos_boot_interconnect': "nwb",
>>>         'pos_boot_dev': "sda",
>>>         'pos_partsizes': "1:20:50:15",
>>>         'linux_serial_console_default': 'ttyUSB0'
>>>     },
>>>     target_type = "Intel NUC5i5425OU")
or by calling
ttbl.test_target.tags_update()
on an already created target:

>>> ttbl.config.targets['nwb'].tags_update({
>>>     'mac_addr': '00:50:b6:27:4b:77'
>>> })
during runtime, from the client with tcf property-set:
$ tcf property-set TARGETNAME PROPERTY VALUE
or calling
tcfl.tc.target_c.property_set()
:

>>> target.property_set("PROPERTY", "VALUE")
8.4.1. Common metadata¶
bios_boot_time (int): approx time in seconds the system takes to boot before it can be half useful (like BIOS can interact, etc).
Considered as zero if missing.
id (str): name of the target
fullid (str): Full name of the target that includes the server’s short name (AKA); SERVERAKA/ID.
TARGETNAME (bool): True; each target exposes a tag named after itself, set to True.
bsp_models (list of str): ways in which the BSPs in a target (described in the bsps dictionary) can be used.
If a target has more than one BSP, how can they be combined? e.g:
- BSP1
- BSP2
- BSP1+2
- BSP1+3
would describe that in a target with three BSPs, 1 and 2 can be used individually or the target can operate using 1+2 or 1+3 together (but not 3+2 or 1+2+3).
bsps (dictionary of dictionaries keyed by BSP name): describes each BSP the target contains
A target that is capable of computing (eg: an MCU board vs let’s say, a toaster) would describe a BSP; each BSP dictionary contains the following keys:
- cmdline (str): [QEMU driver] command line used to boot a QEMU target
- zephyr_board (str): [Zephyr capable targets] identifier to use for building Zephyr OS applications for this board as the BOARD parameter to the Zephyr build process.
- zephyr_kernelname (str): [Zephyr capable targets] name of the file to use as Zephyr image resulting from the Zephyr OS build process.
- sketch_fqbn (str): [Sketch capable targets] identifier to use for building Arduino applications for this board.
- sketch_kernelname (str): [Sketch capable targets] name of the file to use as image resulting from the Sketch build process.
disabled (bool): True if the target is disabled, False otherwise.
fixture_XYZ (bool): when present and True, the target exposes feature (or a test fixture) named XYZ
interconnects (dictionary of dictionaries keyed by interconnect name):
When a target belongs to an interconnect, there will be an entry here naming the interconnect. Note the interconnect might be in another server, not necessarily in the same server as the target is.
Each interconnect might have the following (or other fields) with address assignments, etc:
- bt_addr (str): Bluetooth Address (48bits HH:HH:HH:HH:HH:HH, where HH are two hex digits) that will be assigned to this target in this interconnect (when describing a Bluetooth interconnect)
- mac_addr (str): Ethernet Address (48bits HH:HH:HH:HH:HH:HH, where HH are two hex digits) that will be assigned to this target in this interconnect (when describing ethernet or similar interconnects)
- ipv4_addr (str): IPv4 Address (32bits, DDD.DDD.DDD.DDD, where DDD are decimal integers 0-255) that will be assigned to this target in this interconnect
- ipv4_prefix_len (int): length in bits of the network portion of the IPv4 address
- ipv6_addr (str): IPv6 Address (128bits, standard ipv6 colon format) that will be assigned to this target in this interconnect
- ipv6_prefix_len (int): length in bits of the network portion of the IPv6 address
idle_poweroff (int): seconds the target will be idle before the system will automatically power it off (if 0, it will never be powered off).
interfaces (list of str): list of interface names
interfaces_names (str): list of interface names as a single string separated by spaces
mutex (str): who is the current owner of the target
owner (str): who is the current owner of the target
path (str): path where the target state is maintained
things (list of str): list of names of targets that can be plugged/unplugged to/from this target.
type (str): type of the target
8.4.2. Interface specific metadata¶
- consoles (list of str): [console interface] names of serial consoles supported by the target
- debug-BSP-gdb-tcp-port (int): [debug interface] TCF port on which to reach a GDB remote stub for the given BSP (depending on target capability).
- images-TYPE-QUALIFIER (str): [imaging interface] File name of image that was flashed of a given type and qualifier; eg images-kernel-arc with a value of /var/cache/ttbd-production/USERNAME/somefile.elf was an image flashed as a kernel for architecture ARC).
- openocd.path (str): [imaging interface] path of the OpenOCD implementation being used
- openocd.pid (unsigned): [imaging interface] PID of the OpenOCD process driving this target
- openocd.port (unsigned): [imaging interface] Base TCP port where we can connect to the OpenOCD process driving this target
- powered (bool): [power control interface] True if the target is powered up, False otherwise.
- power_state (bool): [power control interface] ‘on’ if the target is powered up, ‘off’ otherwise. (FIXME: this has to be unified with powered)
8.4.3. Driver / target type specific metadata¶
hard_recover_rest_time (unsigned): [ttbl.tt.tt_flasher driver, OpenOCD targets] time the target has to be kept off when power-cycling to recover after a failed reset, reset halt or reset after power-cycle when flashing.
When the flasher (usually OpenOCD) cannot make the target comply, the driver will power cycle it to try to get it to a well known state.
linux (bool): True if this is a target that runs Linux
quark_se_stub (bool): FIXME: DEPRECATED
qemu_bios_image (str): [QEMU driver] file name used for the target’s BIOS (depending on configuration)
qemu_ro_image (str): [QEMU driver] file name used for the target’s read-only image (depending on configuration)
qemu-image-kernel-ARCH (str): [QEMU driver] file used as a kernel to boot a QEMU target (depending on configuration)
qemu-cmdline-ARCH (str): [QEMU driver] command line used to launch the QEMU process implementing the target (depending on configuration)
ifname (str): [QEMU driver / SLIP] interface created to hookup the SLIP networking tun/tap into the vlan to connect to external networks or other VMs [FIXME: make internal]
slow_flash_factor (int): [ttbl.tt.tt_flasher driver, OpenOCD targets] amount to scale up the timeout to flash into an OpenOCD capable target. Some targets have a slower flashing interface and need more time.
tunslip-ARCH-pid (int): [QEMU driver] PID of the process implementing tunslip for a QEMU target.
ram_megs (int): Megs of RAM supported by the target
ssh_client (bool): True if the target supports SSH
8.4.4. Provisioning OS specific metadata¶
linux_serial_console_default: which device the target sees as the system’s serial console connected to TCF’s first console.
If DEVICE (eg: ttyS0) is given, Linux will be booted with the argument console=DEVICE,115200.
linux_options_append: string describing options to append to a Linux kernel boot command line.
pos_capable: dictionary describing a target as able to boot into a Provisioning OS to perform target provisioning.
Keys are the same as described in
tcfl.pos.capability_fns
(e.g.: boot_to_pos, boot_config, etc).

Values are one of the second-level keys in the
tcfl.pos.capability_fns
dictionary (e.g.: pxe, uefi…).

This indicates to the system which methodologies have to be used for the target to get into Provisioning OS mode, configure the bootloader, etc.
pos_http_url_prefix: string describing the prefix to send for loading a Provisioning OS kernel/initramfs. See here.
Python’s
%(NAME)s
codes can be used to substitute values from the target’s tags or the interconnect’s.

Example:
pos_http_url_prefix = "http://192.168.97.1/ttbd-pos/%(bsp)s/"
bsp is commonly used, as the images for one architecture won’t work for another. bsp is taken from the target’s tag bsp. If not present, the first BSP (in alphabetical order) declared in the target tag bsps will be used.
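The substitution and fallback described above can be sketched as follows (an illustrative simplification, not the actual client code):

```python
def pos_url(tags, url_prefix="http://192.168.97.1/ttbd-pos/%(bsp)s/"):
    """Expand %(bsp)s in the URL prefix from the target's tags.

    If the target has no 'bsp' tag, fall back to the first BSP (in
    alphabetical order) declared in its 'bsps' tag, as described above.
    """
    bsp = tags.get("bsp")
    if bsp is None:
        bsp = sorted(tags["bsps"].keys())[0]
    return url_prefix % {"bsp": bsp}
```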
pos_image: string describing the image used to boot the target in POS mode; defaults to tcf-live.
For each image, in the server,
ttbl.dhcp.pos_cmdline_opts
describes the kernel options to append to the kernel image, which is expected to be found at POS_HTTP_URL_PREFIX/vmlinuz-POS_IMAGE (see pos_http_url_prefix above)
pos_partscan_timeout: maximum number of seconds we wait for a partition table scan to show information about the partition table before we consider it is really empty (some HW takes a long time).
This is used in
tcfl.pos.fsinfo_read
.
pos_reinitialize: when set to any value, the client provisioning code understands the boot drive for the target has to be repartitioned and reformatted before provisioning:

$ tcf property-set TARGET pos_reinitialize True
$ tcf run -t TARGETs <something that deploys>
uefi_boot_manager_ipv4_regex: allows specifying a Python regular expression that describes the format/name of the UEFI boot entry that will PXE boot off the network. For example:
>>> ttbl.config.targets['PC-43j'].tags_update({
>>>     'uefi_boot_manager_ipv4_regex': 'UEFI Network'
>>> })
Function tcfl.pos_uefi._efibootmgr_setup() can use this if the defaults do not work and
target.pos.deploy_image()
reports:

Cannot find IPv4 boot entry, enable manually

even after the PXE boot entry has been enabled manually.
Note this will be compiled into a Python regex.
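A sketch of how such a value is used once compiled (the boot entry names below are hypothetical examples of efibootmgr output):

```python
import re

# The value of uefi_boot_manager_ipv4_regex is compiled into a
# Python regex and matched against the UEFI boot entry names
regex = re.compile("UEFI Network")

# Hypothetical efibootmgr-style entry names, for illustration only
entries = [
    "Windows Boot Manager",
    "UEFI Network IPv4 Intel Ethernet",
    "UEFI Network IPv6 Intel Ethernet",
]

# Entries whose name matches the configured regex
matching_entries = [e for e in entries if regex.search(e)]
```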
8.5. ttbd HTTP API¶
The HTTP API exported by ttbd is a very basic REST model which is going to be converted at some point to OData or JSON-RPC.
FIXME: this document is work in progress
It is recommended to access the server using the Python API as defined
in tcfl.tc.target_c
.
>>> import tcfl
>>> target = tcfl.target_c.create_from_cmdline_args(None, "TARGETNAME")
>>> target.power.off()
If raw HTTP access is needed, a good way to double-check that things are being done right is to run the tcf client with --debug, since it will print the HTTP requests it makes, for cross-checking:
$ tcf --debug login USERNAME
...
I ttb_client.rest_login():679: https://localhost:5004: checking for a valid session
Login to https://localhost:5004 as USERNAME
Password:
D ttb_client.send_request():275: send_request: PUT https://localhost:5004/ttb-v1/login
D connectionpool._new_conn():813: Starting new HTTPS connection (1): localhost:5004
send: 'PUT /ttb-v1/login HTTP/1.1\r\n
Host: localhost:5004\r\n
Connection: keep-alive\r\n
Accept-Encoding: gzip, deflate\r\n
Accept: */*\r\n
User-Agent: python-requests/2.20.0\r\n
Cookie: remember_token=USERNAME|e72e83d4ae70d6ef484da8cec6fa1c4d93833327dabda9566bb12091038cfbe982f7ec3b1d269ae6316969489e546bf797ce564c8daef89f13451505ae5b5a37; session=.eJxNj0-LgzAUxL_K8s5S2rRehL0sacVDXlAs8nKRbutu_inFKhJLv_tmb70NMwzzmye0P2P30JBN49wl0JobZE_4-IYMJD-maK8rNuUO-ZfHWkRNQXJvlC13ZB1Dro2sBRNWOcULJnmR4uqYaJSWuQiqLw9oK68sMVmfvGwo9mgvetqiPe5VUy6KUxB5pZHREvcWsVZG9UVKVsdN70TkEA0dMD9vY6ZlTYvkkcueF-xPTtW_n_BKwNy6YTJT2FzmSbdTuHeQDbP3b8n_OzDDxQVIYH50Y_vmvP4Ax1dagQ.ENlzCw.jIg8VhRQADhEZiyNtCh2A6HRFsk\r\n
Content-Length: 60\r\n
Content-Type: application/x-www-form-urlencoded\r\n
\r\n
password=PASSWORD&email=USERNAME'
reply: 'HTTP/1.1 200 OK\r\n'
header: Vary: Cookie
header: Set-Cookie: remember_token="USERNAME|1efa96aafcf99f21c105d8323d161d205fa8bd1e7aa2ed3fcab38daba7f0c748280941d478ed4e3fc9b4e5f6606d35abad1e23666ee56b55be6adb560f8748e9"; Expires=Fri, 20-Dec-2019 21:03:40 GMT; Path=/
header: Set-Cookie: session=.eJyNjz1rwzAYhP9KeefUJEq8GAqlKDEeJGHjYKTFOIkafdkJjoyRQ_571a1jt4Pnjrt7Qvs9yoeCzI-TXEGrL5A94e0EGTC8T6k5L7QpNxR_OVqTqHlg2Glhyg03FlGsNKsJIkZYgQvEcJHSxSLSCMVyEkRf7qipnDAcsfrgWMNjjm9Jz9fU7LeiKWeBeSB5pSjic-ybyVJp0RcpNyp2OkviDtLwHc2P68gUq_nMcNxljjPtD1bU1w94rUBf5OC1D0k3edX6cJeQDZNzf8jvO9BDZ0Nyl6Nc3q-3YemcXD714KVLzrceVjA95Nj-x_r6AYOEbek.ENmCrA.JjU3Fqwtw2jvYjbJCJCKYMyR1Gs; HttpOnly; Path=/
header: Content-Length: 92
header: Content-Type: application/json
header: Server: TornadoServer/5.0.2
...
Arguments are encoded as HTTP form fields; for non-scalar arguments, the values are JSON encoded.
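A sketch of this encoding rule using only the standard library (the argument names are taken from the console read example below; the helper itself is hypothetical):

```python
import json
import urllib.parse

def encode_args(args):
    """Form-encode arguments: scalars go in as-is, non-scalar
    values are JSON-encoded first, per the rule above."""
    form = {}
    for name, value in args.items():
        if isinstance(value, (str, int, float, bool)):
            form[name] = value
        else:
            form[name] = json.dumps(value)
    return urllib.parse.urlencode(form)

# Scalar arguments, as in the console/read example
body = encode_args({"component": "sol0_ssh", "offset": 20})
```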
Authentication is done via cookies, which also include the username,
stored in ~/.tcf/cookies-SERVERNAME.pickle
, which can be loaded in
python with:
>>> import cPickle, requests
>>> cookie = cPickle.load(open("/home/USER/.tcf/cookies-httpsservername000.pickle"))
and then this can be used to make a request to, for example, the console interface:
>>> r = requests.get("https://servername:5000/ttb-v1/targets/r14s40/console/read",
>>> verify = False, data = dict(offset = 20, component = "sol0_ssh"),
>>> cookies = cookie)
>>> r.text
8.5.1. Basic target interface¶
Common arguments:
ticket
: ticket under which the current owner is holding the target; this is a unique string identifier, the same as used to acquire:

$ tcf login username
$ tcf -t BLAHBLAH acquire TARGETNAME
means TARGETNAME is now acquired by username:BLAHBLAH since the ticket is BLAHBLAH.
component
: for interfaces that understand multiple implementations or multiplex to multiple components (eg: power, console, images) this is a string that indicates to which instance to direct the request.
Warning
FIXME: this document is work in progress, to get more info for the
time being, tcf.git/ttbd
unpacks most of these calls (as per
the @app.route
decorator); needed arguments can be extracted by
looking at what is obtained with flask.request.form.get()
Endpoints:
- /ttb/v1/login PUT
- /ttb/v1/logout PUT
- /ttb/v1/validate_session GET
- /ttb/v1/targets GET
- /ttb/v1/targets/TARGETNAME/ GET
- /ttb/v1/targets/TARGETNAME/acquire PUT
- /ttb/v1/targets/TARGETNAME/active PUT
- /ttb/v1/targets/TARGETNAME/release PUT
- /ttb/v1/targets/TARGETNAME/enable PUT
- /ttb/v1/targets/TARGETNAME/disable PUT
- /ttb/v1/targets/TARGETNAME/property_set PUT
- /ttb/v1/targets/TARGETNAME/property_get GET
- /ttb/v1/targets/TARGETNAME/ip_tunnel POST
- /ttb/v1/targets/TARGETNAME/ip_tunnel DELETE
- /ttb/v1/targets/TARGETNAME/ip_tunnel GET
- /ttb/v1/files/FILENAME POST: upload a file to user’s storage
- /ttb/v1/files/FILENAME GET: download a file from the user’s storage
- /ttb/v1/files/FILENAME DELETE: delete a file from the user’s storage
- /ttb/v1/files GET: list files in the user’s storage
8.5.2. General interface access¶
Functionality to manipulate / access targets is implemented by
separate, unrelated interfaces available at the endpoints
/ttb/v1/TARGETNAME/INTERFACENAME/METHODNAME
(with PUT, GET, POST or
DELETE depending on the operation).
Note different targets might implement different interfaces, and thus
not all of them are always available. Interfaces supported by a target
are available by listing the target’s metadata (with
/ttb/v1/targets[/TARGETNAME]
) and looking for the value of the
interfaces field.
Warning
FIXME: this document is work in progress, to get more info for the
time being, tcf.git/ttbd/ttbl/*.py
implements these interfaces
by instantiating a ttbl.tt_interface
and implementing
calls to METHOD_NAME
where method is put, post, get or
delete. The dictionary args passed is a dictionary with the
arguments passed in the HTTP call.
8.5.2.1. Power¶
- /ttb/v1/targets/TARGETNAME/power/off PUT
- /ttb/v1/targets/TARGETNAME/power/on PUT
- /ttb/v1/targets/TARGETNAME/power/cycle PUT
- /ttb/v1/targets/TARGETNAME/power/get GET
- /ttb/v1/targets/TARGETNAME/power/list GET
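For illustration, a sketch that builds (but does not send) such a request with the standard library; SERVERNAME and the ticket value are placeholders:

```python
import urllib.parse
import urllib.request

BASE = "https://SERVERNAME:5000/ttb-v1"

def power_request(target, operation, ticket=""):
    """Build (without sending) the HTTP request for one of the PUT
    power operations (on/off/cycle); `ticket` identifies the
    acquisition, as described in the common arguments above."""
    data = urllib.parse.urlencode({"ticket": ticket}).encode("ascii")
    return urllib.request.Request(
        "%s/targets/%s/power/%s" % (BASE, target, operation),
        data=data, method="PUT")

req = power_request("r14s40", "cycle", ticket="BLAHBLAH")
```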
8.5.2.2. Console¶
- /ttb/v1/targets/TARGETNAME/console/setup PUT
- /ttb/v1/targets/TARGETNAME/console/list GET
- /ttb/v1/targets/TARGETNAME/console/enable PUT
- /ttb/v1/targets/TARGETNAME/console/disable PUT
- /ttb/v1/targets/TARGETNAME/console/state GET
- /ttb/v1/targets/TARGETNAME/console/size GET
- /ttb/v1/targets/TARGETNAME/console/read GET
- /ttb/v1/targets/TARGETNAME/console/write PUT
8.5.2.3. Capture¶
- /ttb/v1/targets/TARGETNAME/capture/start POST
- /ttb/v1/targets/TARGETNAME/capture/stop_and_get POST
- /ttb/v1/targets/TARGETNAME/capture/list GET
8.5.2.4. Buttons¶
- /ttb/v1/targets/TARGETNAME/buttons/sequence PUT
- /ttb/v1/targets/TARGETNAME/buttons/list GET
8.5.2.5. Fastboot¶
- /ttb/v1/targets/TARGETNAME/fastboot/run PUT
- /ttb/v1/targets/TARGETNAME/fastboot/list GET
8.5.2.6. Images¶
- /ttb/v1/targets/TARGETNAME/images/flash PUT
- /ttb/v1/targets/TARGETNAME/images/list GET
8.5.2.7. IOC_flash_server_app¶
- /ttb/v1/targets/TARGETNAME/ioc_flash_server_app/run GET
8.5.2.8. Things¶
- /ttb/v1/targets/TARGETNAME/things/list GET
- /ttb/v1/targets/TARGETNAME/things/get GET
- /ttb/v1/targets/TARGETNAME/things/plug PUT
- /ttb/v1/targets/TARGETNAME/things/unplug PUT
8.5.3. Examples¶
8.5.3.1. Example: listing targets over HTTP¶
What the command line tool would be:
$ tcf list -vv
anything that has an @ sign is being actively used by TCF; another -v will get you the same JSON that either of:
$ wget --no-check-certificate https://SERVERNAME:5000/ttb-v1/targets
$ curl -k https://SERVERNAME:5000/ttb-v1/targets/
will return; in JSON, you can tell a target is idle if owner is None or missing; if it has a value, it is the user ID of whoever has it:
{
...
'id': 'r14s40',
....
'owner': None,
...
}
now:
$ tcf login USERNAME
Password: <....>
$ tcf acquire r14s40
$ tcf list -vvv r14s40
{ ...
'id': u'r14s40',
...
'owner': u'USERNAME',
...
}
In Python:
import requests
r = requests.get("https://SERVERNAME:5000/ttb-v1/targets", verify = False)
r.json()
8.5.3.2. Example: Reading the console(s) from HTTP¶
You can see which consoles are available with any of:
$ tcf acquire r14s40
$ tcf console-list r14s40
$ tcf list -vvv r14s40 | grep consoles
You can continuously read with:
$ tcf console-read --follow r14s40
in Python, over raw HTTP, this can be done like:
>>> import cPickle, requests
>>> cookie = cPickle.load(open("/home/user/.tcf/cookies-httpsSERVERNAME5000.pickle"))
>>> r = requests.get("https://SERVERNAME:5000/ttb-v1/targets/r14s40/console/read",
... verify = False, data = dict(offset = 20, component = "sol0_ssh"), cookies = cookie)
>>> r.text
So you put this in a loop, which is what tcf console-read --follow does (in tcf.git/tcfl/target_ext_console.py:_cmdline_console_read).
8.6. ttbd Configuration API for targets¶
-
class
conf_00_lib.
vlan_pci
¶ Power controller to implement networks on the server side.
Supports:
connecting the server to physical networks with physical devices (normal or VLAN networks)
creating internal virtual networks with macvtap http://virt.kernelnewbies.org/MacVTap so VMs running in the host can get into said networks.
When a physical device is also present, it is used as the upper device (instead of a bridge) so traffic can flow from physical targets to the virtual machines in the network.
tcpdump capture of network traffic
This behaves as a power control implementation that when turned:
- on: sets up the interfaces, brings them up, start capturing
- off: stops all the network devices, making communication impossible.
Capturing with tcpdump
Can be enabled setting the target’s property tcpdump:
$ tcf property-set TARGETNAME tcpdump FILENAME
this will have the target dump all captured traffic to a file called FILENAME in the daemon file storage area for the user who owns the target. The file can then be recovered with:
$ tcf store-download FILENAME
FILENAME must be a valid file name, with no directory components.
Note
Note this requires the property tcpdump being registered in the configuration with
>>> ttbl.test_target.properties_user.add('tcpdump')
so normal users can set/unset it.
Example configuration (see naming networks):
>>> target = ttbl.test_target("nwa")
>>> target.interface_add("power", ttbl.power.interface(vlan_pci()))
>>> ttbl.config.interconnect_add(
>>>     target,
>>>     tags = {
>>>         'ipv4_addr': '192.168.97.1',
>>>         'ipv4_prefix_len': 24,
>>>         'ipv6_addr': 'fc00::61:1',
>>>         'ipv6_prefix_len': 112,
>>>         'mac_addr': '02:61:00:00:00:01:',
>>>     })
Now QEMU targets (for example), can declare they are part of this network and upon start, create a tap interface for themselves:
$ ip link add link _bnwa name tnwaTARGET type macvtap mode bridge
$ ip link set tnwaTARGET address 02:01:00:00:00:IC_INDEX up
which then is given to QEMU as an open file descriptor:
-net nic,model=virtio,macaddr=02:01:00:00:00:IC_INDEX -net tap,fd=FD
(targets implemented by
conf_00_lib_pos.target_qemu_pos_add()
and conf_00_lib_mcu.target_qemu_zephyr_add()
with VMs implement this behaviour).

Notes:
- keep target names short, as they will be used to generate network interface names and those are limited in size (usually to about 12 chars?), eg tnwaTARGET comes from nwa being the name of the network target/interconnect, TARGET being the target connected to said interconnect.
- IC_INDEX: the index of the TARGET in the interconnect/network;
it is recommended, for simplicity, to make them match with the MAC
address, IP address and target name, so for example:
- targetname: pc-04
- ic_index: 04
- ipv4_addr: 192.168.1.4
- ipv6_addr: fc00::1:4
- mac_addr: 02:01:00:00:00:04
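This convention can be sketched as a small helper that derives the per-target values from the interconnect name, target name and index (illustrative only, not part of TCF):

```python
def ic_target_values(ic_name, target_name, ic_index):
    """Derive interface name, MAC and IP addresses from the
    interconnect name and the target's index in it, following
    the convention above (index 4 -> 192.168.1.4, etc)."""
    return {
        # e.g. tnwaTARGET; keep names short, interface names are
        # limited in length (usually to about 12-15 chars)
        "ifname": "t%s%s" % (ic_name, target_name),
        "mac_addr": "02:01:00:00:00:%02d" % ic_index,
        "ipv4_addr": "192.168.1.%d" % ic_index,
        "ipv6_addr": "fc00::1:%x" % ic_index,
    }

v = ic_target_values("nwa", "pc-04", 4)
```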
If a tag named mac_addr is given, containing the MAC address of a physical interface in the system, then it will be taken over as the point of connection to external targets. Connectivity from any virtual machine in this network will be extended to said network interface, effectively connecting the physical and virtual targets.
Warning
DISABLE Network Manager’s (or any other network manager’s) control of this interface, otherwise it will interfere and the network will not operate.
Follow these steps:
System setup:
- ttbd must be run with CAP_NET_ADMIN so it can create network
interfaces. For that, either add to systemd’s
/etc/systemd/system/ttbd@.service
:

CapabilityBoundingSet = CAP_NET_ADMIN
AmbientCapabilities = CAP_NET_ADMIN
or as root, give ttbd the capability:
# setcap cap_net_admin+pie /usr/bin/ttbd
udev’s /etc/udev/rules.d/ttbd-vlan:
SUBSYSTEM == "macvtap", ACTION == "add", DEVNAME == "/dev/tap*", GROUP = "ttbd", MODE = "0660"
This is needed so the tap devices can be accessed by user ttbd, which is the user that runs the daemon.
Remember to reload udev’s configuration with udevadm control --reload-rules.
This is already taken care by the RPM installation.
Fixture setup
Select a network interface to use (it can be a USB or PCI interface); find out its MAC address with ip link show.
add the tag mac_addr with said address to the tags of the target object that represents the network to which said interface is to be connected; for example, for a network called nwc
>>> target = ttbl.test_target("nwa")
>>> target.interface_add("power", ttbl.power.interface(vlan_pci()))
>>> ttbl.config.interconnect_add(
>>>     target,
>>>     tags = {
>>>         'ipv4_addr': '192.168.97.1',
>>>         'ipv4_prefix_len': 24,
>>>         'ipv6_addr': 'fc00::61:1',
>>>         'ipv6_prefix_len': 112,
>>>         'mac_addr': "a0:ce:c8:00:18:73",
>>>     })
or for an existing network (such as the configuration’s default nwa):
# eth dongle mac 00:e0:4c:36:40:b8 is assigned to NWA
ttbl.config.targets['nwa'].tags_update(dict(mac_addr = '00:e0:4c:36:40:b8'))
Furthermore, default networks nwa, nwb and nwc are defined to have a power control rail (versus an individual power controller), so it is possible to add another power controller to, for example, power on or off a network switch:
ttbl.config.targets['nwa'].pc_impl.append(
    ttbl.pc.dlwps7("http://USER:PASSWORD@sp5/8"))
This creates a power controller to switch on or off plug #8 on a Digital Loggers Web Power Switch named sp5 and makes it part of the nwa power control rail. Thus, when powered on, it will bring the network up and also turn on the network switch.
add the tag vlan to also be a member of an ethernet VLAN network (requires also a mac_addr):
>>> target = ttbl.test_target("nwa")
>>> target.interface_add("power", ttbl.power.interface(vlan_pci()))
>>> ttbl.config.interconnect_add(
>>>     target,
>>>     tags = {
>>>         'ipv4_addr': '192.168.97.1',
>>>         'ipv4_prefix_len': 24,
>>>         'ipv6_addr': 'fc00::61:1',
>>>         'ipv6_prefix_len': 112,
>>>         'mac_addr': "a0:ce:c8:00:18:73",
>>>         'vlan': 30,
>>>     })
in this case, all packets on the interface described by MAC addr a0:ce:c8:00:18:73 will be tagged with VLAN tag 30.
lastly, for each target connected to that network, update its tags to indicate it:

ttbl.config.targets['TARGETNAME-NN'].tags_update(
    {
        'ipv4_addr': "192.168.10.30",
        'ipv4_prefix_len': 24,
        'ipv6_addr': "fc00::10:30",
        'ipv6_prefix_len': 112,
    },
    ic = 'nwc')
By convention, the server is .1, the QEMU Linux virtual machines are set from .2 to .10 and the QEMU Zephyr virtual machines from .30 to .45. Physical targets are set to start at 100.
Note the networks for targets and infrastructure have to be kept separated.
-
on
(target, _component)¶ Power on the component
Parameters: - target (ttbl.test_target) – target on which to act
- component (str) – name of the power controller we are modifying
8.6.1. Configuration API for capturing audio and video¶
These capture objects are meant to be fed to the capture interface declaration of a target in the server; for example, in any server configuration file where a target has been added, a capture interface can be added with:
ttbl.config.targets['TARGETNAME'].interface_add(
"capture",
ttbl.capture.interface(
screen = "hdmi0_screenshot",
screen_stream = "hdmi0_vstream",
audio_stream = "front_astream",
front_astream = capture_front_astream_vtop_0c76_161e,
hdmi0_screenshot = capture_screenshot_ffmpeg_v4l,
hdmi0_vstream = capture_vstream_ffmpeg_v4l,
hdmi0_astream = capture_astream_ffmpeg_v4l,
)
)
This assumes we have connected and configured:
- an HDMI grabber to the target’s HDMI0 output (see
setup instructions
) - an audio grabber to the front audio output (see
setup instructions
).
to create multiple capture capabilities (video and sound streams, and screenshots) with specific names for the outputs and aliases.
Note the audio capturers are many times HW specific because they expose different audio controls that have to be set or queried.
-
conf_00_lib_capture.
capture_screenshot_ffmpeg_v4l
= <ttbl.capture.generic_snapshot object>¶ A capturer to take screenshots from a v4l device using ffmpeg
Note the fields are target’s tags and others specified in
ttbl.capture.generic_snapshot
andttbl.capture.generic_stream
.To use:
define a target
physically connect the capture interface to it and to the server
Create a udev configuration so the capture device exposes itself as /dev/video-TARGETNAME-INDEX.
This requires creating a udev configuration so that the v4l device gets recognized and an alias created, which can be accomplished by dropping a udev rule in /etc/udev/rules.d such as:
SUBSYSTEM == "video4linux", ACTION == "add", \
    KERNEL=="video*", \
    ENV{ID_SERIAL_SHORT} == "SOMESERIALNUMBER", \
    SYMLINK += "video-nuc-01A-$attr{index}"
note some USB devices don’t offer a serial number, then you can use a device path, such as:
ENV{ID_PATH} == "pci-0000:00:14.0-usb-0:2.1:1.0", \
this shall be a last resort, as moving cables to different USB ports will change the paths and you will have to reconfigure.
add the configuration snippet:
ttbl.config.targets[TARGETNAME].interface_add(
    "capture",
    ttbl.capture.interface(
        screen = "hdmi0_screenshot",
        screen_stream = "hdmi0_vstream",
        hdmi0_screenshot = capture_screenshot_ffmpeg_v4l,
        hdmi0_vstream = capture_vstream_ffmpeg_v4l,
    ))
This has been tested with:
https://www.agptek.com/AGPTEK-USB-3-0-HDMI-HD-Video-Capture-1089-212-1.html
Which shows in USB as:
3-2.2.4     1bcf:2c99 ef 3.10 5000MBit/s 512mA 4IFs (VXIS Inc ezcap U3 capture)
3-2.2.4:1.2 (IF) 01:01:00 0EPs (Audio:Control Device) snd-usb-audio sound/card5
3-2.2.4:1.0 (IF) 0e:01:00 1EP (Video:Video Control) uvcvideo video4linux/video5 video4linux/video4 input/input15
3-2.2.4:1.3 (IF) 01:02:00 0EPs (Audio:Streaming) snd-usb-audio
3-2.2.4:1.1 (IF) 0e:02:00 1EP (Video:Video Streaming) uvcvideo
Note this also can be used to capture video of the HDMI stream using capture_vstream_ffmpeg_v4l and audio played over HDMI via an exposed ALSA interface (see capture_astream_ffmpeg_v4l below).
-
conf_00_lib_capture.
capture_screenshot_vnc
= <ttbl.capture.generic_snapshot object>¶ A capturer to take screenshots from VNC
Note the fields are target’s tags and others specified in
ttbl.capture.generic_snapshot
andttbl.capture.generic_stream
.
-
conf_00_lib_capture.
capture_vstream_ffmpeg_v4l
= <ttbl.capture.generic_stream object>¶ Capture video off a v4l device using ffmpeg
See capture_screenshot_ffmpeg_v4l for setup instructions, as they are common.
- conf_00_lib_capture.capture_astream_ffmpeg_v4l = <ttbl.capture.generic_stream object>
  Capture audio off an ALSA device using ffmpeg
See capture_screenshot_ffmpeg_v4l for setup instructions, as they are similar.
Note the udev setup instructions for ALSA devices are slightly different; instead of SYMLINK we have to set ATTR{id}:

SUBSYSTEM == "sound", ACTION == "add", \
  ENV{ID_PATH} == "pci-0000:00:14.0-usb-0:2.1:1.2", \
  ATTR{id} = "TARGETNAME"
Once this configuration is completed, reload udev (sudo udevadm control --reload-rules) and trigger the device (with udevadm trigger /dev/snd/controlCX or by restarting the machine); /proc/asound should then contain a symlink to the actual card:
$ ls /proc/asound/ -l
total 0
dr-xr-xr-x. 3 root root 0 Jun 21 21:52 card0
dr-xr-xr-x. 7 root root 0 Jun 21 21:52 card4
...
lrwxrwxrwx. 1 root root 5 Jun 21 21:52 TARGETNAME -> card4
...
Device information for Alsa devices (Card 0, Card 1, etc…) can be found with:
$ udevadm info /dev/snd/controlC0
P: /devices/pci0000:00/0000:00:1f.3/sound/card0/controlC0
N: snd/controlC0
S: snd/by-path/pci-0000:00:1f.3
E: DEVLINKS=/dev/snd/by-path/pci-0000:00:1f.3
E: DEVNAME=/dev/snd/controlC0
E: DEVPATH=/devices/pci0000:00/0000:00:1f.3/sound/card0/controlC0
E: ID_PATH=pci-0000:00:1f.3
E: ID_PATH_TAG=pci-0000_00_1f_3
E: MAJOR=116
E: MINOR=11
E: SUBSYSTEM=sound
E: TAGS=:uaccess:
E: USEC_INITIALIZED=30391111
As indicated in capture_screenshot_ffmpeg_v4l, using ENV{ID_SERIAL_SHORT} is preferred if available.
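Given the ATTR{id} setup above, /proc/asound/TARGETNAME becomes a symlink to the real cardN directory; resolving it to a card number is a one-liner. This is an illustrative sketch (the function name and the directory argument are made up; the latter exists only so the logic can be exercised outside a real /proc):

```python
import os

def alsa_card_number(target_name, asound_dir = "/proc/asound"):
    """Resolve /proc/asound/TARGETNAME -> cardN and return N."""
    link = os.readlink(os.path.join(asound_dir, target_name))
    # links look like "card4" -> card number 4
    assert link.startswith("card")
    return int(link[len("card"):])
```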
- conf_00_lib_capture.capture_agptek_hdmi_astream = <ttbl.capture.generic_stream object>
  Capture HDMI audio from an AGPTEK USB 3.0 HDMI HD Video Capture
  We can’t use a generic ALSA capturer because there seem to be glitches in the device.
- conf_00_lib_capture.capture_front_astream_vtop_0c76_161e = <ttbl.capture.generic_stream object>
  Capture audio with the USB capturer VTOP/JMTEK 0c76:161e
https://www.amazon.com/Digital-Audio-Capture-Windows-10-11/dp/B019T9KS04
This is for capturing audio on the audio grabber connected to the main builtin sound output of the target (usually identified as front by the Linux driver subsystem), which UDEV has configured to be called TARGETNAME-front:
SUBSYSTEM == "sound", ACTION == "add", \ ENV{ID_PATH} == "pci-0000:00:14.0-usb-0:2.3.1:1.0", \ ATTR{id} = "TARGETNAME-front"
8.6.2. Configuration API for MCUs used with the Zephyr OS and others¶
- conf_00_lib_mcu.arduino101_add(name=None, fs2_serial=None, serial_port=None, ykush_url=None, ykush_serial=None, variant=None, openocd_path='/opt/zephyr-sdk-0.10.0/sysroots/x86_64-pokysdk-linux/usr/bin/openocd', openocd_scripts='/opt/zephyr-sdk-0.10.0/sysroots/x86_64-pokysdk-linux/usr/share/openocd/scripts', debug=False, build_only=False)
  Configure an Arduino 101 for the fixture described below
This Arduino101 fixture includes a Flyswatter2 JTAG which allows flashing, debugging and a YKUSH power switch for power control.
Add to a server configuration file:
arduino101_add(
    name = "arduino101-NN",
    fs2_serial = "arduino101-NN-fs2",
    serial_port = "/dev/tty-arduino101-NN",
    ykush_url = "http://USER:PASSWORD@HOST/SOCKET",
    ykush_serial = "YKXXXXX")
restart the server and it yields:
$ tcf list local/arduino101-NN
Parameters:
- name (str) – name of the target
- fs2_serial (str) – USB serial number for the FlySwatter2 (defaults to TARGETNAME-fs2)
- serial_port (str) – name of the serial port (defaults to /dev/tty-TARGETNAME)
- ykush_serial (str) – USB serial number of the YKUSH hub.
- ykush_url (str) – (optional) URL for the DLWPS7 power controller to the YKUSH. If None, the YKUSH is considered always on. See conf_00_lib_pdu.dlwps7_add(). FIXME: take a PC object so something different than a DLWPS7 can be used.
Overview
To power on the target, we first power the YKUSH, then the Flyswatter, then the serial port and then the board itself; thus we need to wait for each part to correctly show up in the system after powering it on (or to go away after powering it off). Then the system starts OpenOCD to connect to the board via the JTAG.
Powering on/off the YKUSH is optional, but highly recommended.
See the rationale for this complicated setup.
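The ordered power-on / reverse power-off sequencing described above can be sketched as a simple power rail; this is an illustrative simplification (the class and names here are made up), not ttbl's actual power rail implementation:

```python
# Minimal sketch of an ordered power rail: components are powered on
# in sequence and powered off in reverse order.

class power_rail_c(object):
    def __init__(self, *components):
        # components: (name, on_fn, off_fn) tuples, in power-on order
        self.components = components

    def power_on(self):
        for _name, on_fn, _off_fn in self.components:
            on_fn()    # real code would also wait for the device to appear

    def power_off(self):
        for _name, _on_fn, off_fn in reversed(self.components):
            off_fn()

log = []
rail = power_rail_c(
    ("ykush", lambda: log.append("ykush on"), lambda: log.append("ykush off")),
    ("flyswatter", lambda: log.append("fs2 on"), lambda: log.append("fs2 off")),
    ("serial", lambda: log.append("serial on"), lambda: log.append("serial off")),
    ("board", lambda: log.append("board on"), lambda: log.append("board off")),
)
rail.power_on()
rail.power_off()
```

Powering off in reverse order guarantees the board loses power before the dongles that feed its signal lines, which matters for the leakage concerns discussed in these fixtures.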
Bill of materials
- an available port on a DLWPS7 power switch (optional)
- a Yepkit YKUSH power-switching hub (see bill of materials in conf_00_lib_pdu.ykush_targets_add())
- an Arduino101 (note it must have original firmware; if you need to reset it, follow these instructions).
- a USB A-Male to B-female for power to the Arduino 101
- a USB-to-TTL serial cable for the console (power)
- three M/M jumper cables
- A Flyswatter2 for flashing and debugging
- Flash a new serial number on the Flyswatter2 following the instructions.
- a USB A-Male to B-female for connecting the Flyswatter to the YKush (power and data)
- An ARM-JTAG 20-10 adapter miniboard and flat ribbon cable (https://www.olimex.com/Products/ARM/JTAG/ARM-JTAG-20-10/) to connect the JTAG to the Arduino101’s jtag port.
Connecting the test target fixture
- connect the Arduino’s USB port to the YKUSH downstream port 3
- Flyswatter2 JTAG:
connect the USB port to the YKUSH downstream port 1
flash a new serial number on the Flyswatter2 following the instructions.
This is needed to distinguish multiple Flyswatter2 JTAGs connected in the same system, as they all come flashed with the same number (FS20000).
connect the ARM-JTAG 20-10 adapter cable to the FlySwatter2 and to the Arduino101.
Note the flat ribbon cable has to be properly aligned; the red cable indicates pin #1. The board connectors might have a dot, a number 1 or some sort of marking indicating where pin #1 is.
If your ribbon cable has no red cable, just choose one end as pin #1 and align it on both boards.
- connect the USB-to-TTY serial adapter to the YKUSH downstream port 2
- connect the USB-to-TTY serial adapter to the Arduino 101 with the
M/M jumper cables:
- USB FTDI Black (ground) to Arduino101’s serial ground pin
- USB FTDI White (RX) to the Arduino101’s TX
- USB FTDI Green (TX) to Arduino101’s RX.
- USB FTDI Red (power) is left open, it has 5V.
- connect the YKUSH to the server system and to power as described in conf_00_lib_pdu.ykush_targets_add()
Configuring the system for the fixture
- Choose a name for the target: arduino101-NN (where NN is a number)
- Find the YKUSH’s serial number YKNNNNN [plug it and run dmesg for a quick find], see conf_00_lib_pdu.ykush_targets_add().
- Configure udev to add a name for the serial device that represents the USB-to-TTY dongle connected to the target so we can easily find it at /dev/tty-TARGETNAME. Different options for USB-to-TTY dongles with or without a USB serial number.
- conf_00_lib_mcu.a101_dfu_add(name, serial_number, ykush_serial, ykush_port_board, ykush_port_serial=None, serial_port=None)
  Configure an Arduino 101
This is an Arduino101 fixture that uses an YKUSH hub for power control, with or without a serial port (via external USB-to-TTY serial adapter) and requires no JTAG, using DFU mode for flashing. It allows flashing the BLE core.
Add to a server configuration file (eg: /etc/ttbd-production/conf_10_targets.py):

a101_dfu_add("a101-NN", "SERIALNUMBER", "YKNNNNN", PORTNUMBER,
             [ykush_port_serial = PORTNUMBER2,]
             [serial_port = "/dev/tty-a101-NN"])
restart the server and it yields:
$ tcf list local/arduino101-NN
Parameters: - name (str) – name of the target
- serial_number (str) – USB serial number for the Arduino 101
- ykush_serial (str) – USB serial number of the YKUSH hub used for power control
- ykush_port_board (int) – number of the YKUSH downstream port where the board is connected.
- ykush_port_serial (int) – (optional) number of the YKUSH downstream port where the board’s serial port is connected. If not specified, it will be considered there is no serial port.
- serial_port (str) – (optional) name of the serial port (defaults to /dev/tty-NAME)
Overview
The Arduino 101 is powered via the USB connector. The Arduino 101 does not export a serial port over the USB connector; applications loaded onto it might create a USB serial port, but this is not guaranteed to always be the case.
Thus, for ease of use this fixture connects an optional external USB-to-TTY dongle to the TX/RX/GND lines of the Arduino 101 that allows a reliable serial console to be present.
When the serial dongle is in use, the power rail needs to first power up the serial dongle and then the board.
Per this rationale, the need to avoid current leakage and to support a full power down necessitates this setup, which cuts all power to all cables connected to the board (power and serial).
This fixture uses ttbl.tt.tt_dfu to implement the target; refer to it for implementation details.
Bill of materials
- two available ports on an YKUSH power switching hub (serial YKNNNNN); only one if the serial console will not be used.
- an Arduino 101 board
- a USB A-Male to micro-B male cable (for board power)
- (optional) a USB-to-TTY serial port dongle
- (optional) three M/M jumper cables
Connecting the test target fixture
- (if not yet connected), connect the YKUSH to the server system and to power as described in conf_00_lib_pdu.ykush_targets_add()
- connect the Arduino 101’s USB port to the YKUSH downstream port PORTNUMBER
- (if a serial console will be connected) connect the USB-to-TTY serial adapter to the YKUSH downstream port PORTNUMBER2
- (if a serial console will be connected) connect the USB-to-TTY
serial adapter to the Arduino 101 with the M/M jumper cables:
- USB FTDI Black (ground) to Arduino 101’s serial ground pin (fourth pin from the bottom)
- USB FTDI White (RX) to the Arduino 101’s TX.
- USB FTDI Green (TX) to Arduino 101’s RX.
- USB FTDI Red (power) is left open, it has 5V.
Configuring the system for the fixture
Choose a name for the target: a101-NN (where NN is a number)
(if needed) Find the YKUSH’s serial number YKNNNNN [plug it and run dmesg for a quick find], see conf_00_lib_pdu.ykush_targets_add().
Find the board’s serial number.
Note these boards, when freshly plugged in, will only stay in DFU mode for five seconds and then boot Zephyr (or whichever OS they have), so the USB device will disappear. You need to run lsusb (or whichever command you are using) quickly, or monitor the kernel output with dmesg -w.
Configure udev to add a name for the serial device that represents the USB-to-TTY dongle connected to the target so we can easily find it at /dev/tty-a101-NN. Different options for USB-to-TTY dongles with or without a USB serial number.
Add to the configuration file (eg: /etc/ttbd-production/conf_10_targets.py):

a101_dfu_add("a101-NN", "SERIALNUMBER", "YKNNNNN", PORTNUMBER,
             ykush_port_serial = PORTNUMBER2,
             serial_port = "/dev/tty-a101-NN")
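Given the five-second DFU window noted above, any tooling looking for the board has to poll quickly and repeatedly. A minimal sketch of such a retry loop (the function name and the injected probe callback are illustrative, not part of ttbd's API):

```python
import time

def wait_for_device(probe_fn, timeout = 5.0, poll = 0.2):
    """Return probe_fn()'s first truthy result, retrying until timeout.

    probe_fn stands in for an actual USB device lookup (e.g. scanning
    for the board's serial number); returns None on timeout.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        result = probe_fn()
        if result:
            return result
        time.sleep(poll)
    return None
```

A short poll interval matters more than a long timeout here, since the DFU window itself is only a few seconds.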
- conf_00_lib_mcu.arduino2_add(name, usb_serial_number, serial_port=None, ykush_serial=None, ykush_port_board=None)
  Configure an Arduino Due board for the fixture described below
The Arduino Due is an ARM-based development board. It includes a builtin flasher that requires the bossac tool. A single USB connection is used for flashing, serial console and power.
Add to a server configuration file:
arduino2_add(name = "arduino2-NN",
             usb_serial_number = "SERIALNUMBER",
             serial_port = "/dev/tty-arduino2-NN",
             ykush_serial = "YKXXXXX",
             ykush_port_board = N)
restart the server and it yields:
$ tcf list local/arduino2-NN
Parameters: - name (str) – name of the target
- usb_serial_number (str) – USB serial number for the board
- serial_port (str) – name of the serial port (defaults to /dev/tty-TARGETNAME).
- ykush_serial (str) – USB serial number of the YKUSH hub
- ykush_port_board (int) – number of the YKUSH downstream port where the board power is connected.
Overview
Per this rationale, the need to avoid current leakage and to support a full power down necessitates this setup, which cuts all power to all cables connected to the board (power and serial).
Bill of materials
- an Arduino Due board
- a USB A-Male to micro-B male cable (for board power, flashing and console)
- one available port on an YKUSH power switching hub (serial YKNNNNN)
Connecting the test target fixture
- connect the Arduino Due’s OpenSDA (?) port with the USB A-male to B-micro to YKUSH downstream port N
- connect the YKUSH to the server system and to power as described in conf_00_lib_pdu.ykush_targets_add()
Configuring the system for the fixture
- Choose a name for the target: arduino2-NN (where NN is a number)
- (if needed) Find the YKUSH’s serial number YKNNNNN [plug it and run dmesg for a quick find], see conf_00_lib_pdu.ykush_targets_add().
- Find the board’s serial number
- Configure udev to add a name for the serial device for the board’s serial console so it can be easily found at /dev/tty-TARGETNAME. Follow these instructions using the board’s serial number.
- conf_00_lib_mcu.emsk_add(name=None, serial_number=None, serial_port=None, brick_url=None, ykush_serial=None, ykush_port=None, openocd_path='/opt/zephyr-sdk-0.10.0/sysroots/x86_64-pokysdk-linux/usr/bin/openocd', openocd_scripts='/opt/zephyr-sdk-0.10.0/sysroots/x86_64-pokysdk-linux/usr/share/openocd/scripts', debug=False, model=None)
  Configure a Synopsys EM Starter Kit (EMSK) board configured for an EM* SOC architecture, with a power brick and a YKUSH USB port providing power control.
  The board includes a builtin JTAG which allows flashing and debugging; it only requires one upstream connection to a YKUSH power-switching hub for power, serial console and JTAG.
Add to a server configuration file:
emsk_add(name = "emsk-NN",
         serial_number = "SERIALNUMBER",
         ykush_serial = "YKXXXXX",
         ykush_port_board = N,
         model = "emsk7d")
restart the server and it yields:
$ tcf list local/emsk-NN
Parameters: - name (str) – name of the target
- serial_number (str) – USB serial number for the board
- serial_port (str) – name of the serial port (defaults to /dev/tty-TARGETNAME).
- ykush_serial (str) – USB serial number of the YKUSH hub
- ykush_port_board (int) – number of the YKUSH downstream port where the board power is connected.
- brick_url (str) – URL for the power switch to which the EMSK’s power brick is connected (this assumes for now you are using a DLWPS7 for power, so the url will be in the form http://user:password@hostname/port).
- model (str) –
SOC model configured in the board with the blue DIP switches (from emsk7d [default], emsk9d, emsk11d).
DIP1  DIP2  DIP3  DIP4  Model
off   off               em7d
on    off               em9d
off   on                em11d

(on means DIP down, towards the board)
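The model-to-DIP mapping can be kept as a small lookup table; this sketch is illustrative only (it assumes, per the table, that only DIP1 and DIP2 select the model, and uses the emsk* names the model parameter takes):

```python
# DIP switch settings per model; "on" means switch down, towards the
# board. DIP3/DIP4 are not listed in the table and are omitted here.
emsk_dip_settings = {
    # model: (DIP1, DIP2)
    "emsk7d": ("off", "off"),
    "emsk9d": ("on", "off"),
    "emsk11d": ("off", "on"),
}
```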
Overview
Per this rationale, the need to avoid current leakage and to support a full power down necessitates this setup, which cuts all power to all cables connected to the board (power and serial).
Bill of materials
- a EM Starter Kit board and its power brick
- a USB A-Male to micro-B male cable (for board power, flashing and console)
- one available port on a switchable power hub
- one available port on an YKUSH power switching hub (serial YKNNNNN)
Connecting the test target fixture
- connect the EMSK’s micro USB port with the USB A-male to B-micro to YKUSH downstream port N
- connect the YKUSH to the server system and to power as described in conf_00_lib_pdu.ykush_targets_add()
- Connect the power brick to the EMSK’s power barrel
- Connect the power brick to the available power in the power switch
Configuring the system for the fixture
- Choose a name for the target: emsk-NN (where NN is a number)
- (if needed) Find the YKUSH’s serial number YKNNNNN [plug it and run dmesg for a quick find], see conf_00_lib_pdu.ykush_targets_add().
- Find the board’s serial number
- Configure udev to add a name for the serial device for the board’s serial console so it can be easily found at /dev/tty-TARGETNAME. Follow these instructions using the board’s serial number.
- conf_00_lib_mcu.esp32_add(name, usb_serial_number=None, ykush_serial=None, ykush_port_board=None, serial_port=None)
  Configure an ESP-32 MCU board
The ESP-32 is a Tensilica-based MCU implementing two Xtensa CPUs. This fixture uses a YKUSH hub for power control; the serial port runs over the USB cable, which is also used to flash using esptool.py from the ESP-IDF framework.
See instructions in ttbl.tt.tt_esp32 to install and configure prerequisites in the server.
Add to a server configuration file (eg: /etc/ttbd-production/conf_10_targets.py):

esp32_add("esp32-NN", "SERIALNUMBER", "YKNNNNN", PORTNUMBER)
restart the server and it yields:
$ tcf list local/esp32-NN
Parameters: - name (str) – name of the target
- usb_serial_number (str) – (optional) USB serial number for the esp32; defaults to same as the target
- ykush_serial (str) – USB serial number of the YKUSH hub used for power control
- ykush_port_board (int) – number of the YKUSH downstream port where the board is connected.
- serial_port (str) – (optional) name of the serial port (defaults to /dev/tty-NAME)
Overview
The ESP32 offers the same USB connector for serial port and flashing.
Bill of materials
- one available port on an YKUSH power switching hub (serial YKNNNNN)
- an ESP32 board
- a USB A-Male to micro-B male cable
Connecting the test target fixture
- (if not yet connected), connect the YKUSH to the server system and to power as described in conf_00_lib_pdu.ykush_targets_add()
- connect the esp32’s USB port to the YKUSH downstream port PORTNUMBER
Configuring the system for the fixture
See instructions in ttbl.tt.tt_esp32 to install and configure prerequisites in the server.
Choose a name for the target: esp32-NN (where NN is a number)
(if needed) Find the YKUSH’s serial number YKNNNNN [plug it and run dmesg for a quick find], see conf_00_lib_pdu.ykush_targets_add().
Find the board’s serial number.
Note these boards usually have a serial number of 001; it can be updated easily to a unique serial number following these steps.
Configure udev to add a name for the serial device for the board’s serial console so it can be easily found at /dev/tty-TARGETNAME. Follow these instructions using the board’s serial number.
- conf_00_lib_mcu.frdm_add(name=None, serial_number=None, serial_port=None, ykush_serial=None, ykush_port_board=None, openocd_path='/usr/bin/openocd', openocd_scripts='/usr/share/openocd/scripts', debug=False)
  Configure a FRDM board for the fixture described below
  The FRDM k64f is an ARM-based development board. It includes a builtin JTAG which allows flashing and debugging; it only requires one upstream connection to a YKUSH power-switching hub for power, serial console and JTAG.
Add to a server configuration file:
frdm_add(name = "frdm-NN",
         serial_number = "SERIALNUMBER",
         serial_port = "/dev/tty-frdm-NN",
         ykush_serial = "YKXXXXX",
         ykush_port_board = N)
restart the server and it yields:
$ tcf list local/frdm-NN
Parameters: - name (str) – name of the target
- serial_number (str) – USB serial number for the FRDM board
- serial_port (str) – name of the serial port [FIXME: default to /dev/tty-TARGETNAME]
- ykush_serial (str) – USB serial number of the YKUSH hub
- ykush_port_board (int) – number of the YKUSH downstream port where the board power is connected.
Overview
Per this rationale, the need to avoid current leakage and to support a full power down necessitates this setup, which cuts all power to all cables connected to the board (power and serial).
Bill of materials
- a FRDM k64f board
- a USB A-Male to micro-B male cable (for board power, JTAG and console)
- one available port on an YKUSH power switching hub (serial YKNNNNN)
Connecting the test target fixture
- connect the FRDM’s OpenSDA port with the USB A-male to B-micro to YKUSH downstream port N
- connect the YKUSH to the server system and to power as described in conf_00_lib_pdu.ykush_targets_add()
Configuring the system for the fixture
- Choose a name for the target: frdm-NN (where NN is a number)
- (if needed) Find the YKUSH’s serial number YKNNNNN [plug it and run dmesg for a quick find], see conf_00_lib_pdu.ykush_targets_add().
- Find the board’s serial number
- Configure udev to add a name for the serial device for the board’s serial console so it can be easily found at /dev/tty-TARGETNAME. Follow these instructions using the board’s serial number.
Warning
Ugly magic here. The FRDMs sometimes boot into some bootloader upload mode (with a different USB serial number) from which the only way to get them out is by power-cycling it.
So the power rail for this thing is set with a Power Controller object that does the power cycle itself (pc_board) and then another that looks for a USB device with the right serial number (serial_number). If it fails to find it, it executes an action and waits for it to show up. The action is power cycling the USB device with the pc_board power controller. Lastly, in the power rail, we have the glue that opens the serial ports to the device and the flasher object that start/stops OpenOCD.
Yup, I dislike computers too.
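The find-or-power-cycle logic described in the warning can be sketched as a retry loop; this is illustrative only (find_fn and power_cycle_fn are injected stand-ins for the real USB lookup by serial number and the pc_board power controller):

```python
# Look for the device; if it is missing (e.g. the board is stuck in
# bootloader upload mode with a different USB serial number), power
# cycle it and look again, up to a few times.

def usb_device_find_or_cycle(find_fn, power_cycle_fn, retries = 3):
    for _ in range(retries):
        device = find_fn()
        if device:
            return device
        power_cycle_fn()   # kicks the board out of bootloader mode
    raise RuntimeError("device didn't show up after %d power cycles"
                       % retries)
```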
- conf_00_lib_mcu.ma_add(name=None, serial_number=None, serial_port=None, ykush_serial=None, ykush_port_board=None, openocd_path='/opt/zephyr-sdk-0.10.0/sysroots/x86_64-pokysdk-linux/usr/bin/openocd', openocd_scripts='/opt/zephyr-sdk-0.10.0/sysroots/x86_64-pokysdk-linux/usr/share/openocd/scripts', debug=False)
- conf_00_lib_mcu.mv_add(name=None, fs2_serial=None, serial_port=None, ykush_serial=None, ykush_port_board=None, ykush_port_serial=None, openocd_path='/opt/zephyr-sdk-0.10.0/sysroots/x86_64-pokysdk-linux/usr/bin/openocd', openocd_scripts='/opt/zephyr-sdk-0.10.0/sysroots/x86_64-pokysdk-linux/usr/share/openocd/scripts', debug=False)
  Configure a Quark D2000 for the fixture described below.
The Quark D2000 development board includes a Flyswatter2 JTAG which allows flashing, debugging; it requires two upstream connections to a YKUSH power-switching hub for power and JTAG and another for serial console.
Add to a server configuration file:
mv_add(name = "mv-NN",
       fs2_serial = "mv-NN-fs2",
       serial_port = "/dev/tty-mv-NN",
       ykush_serial = "YKXXXXX",
       ykush_port_board = N1,
       ykush_port_serial = N2)
restart the server and it yields:
$ tcf list local/mv-NN
Parameters: - name (str) – name of the target
- fs2_serial (str) – USB serial number for the FlySwatter2 (should be TARGETNAME-fs2) [FIXME: default to that]
- serial_port (str) – name of the serial port [FIXME: default to /dev/tty-TARGETNAME]
- ykush_serial (str) – USB serial number of the YKUSH hub
- ykush_port_board (int) – number of the YKUSH downstream port where the board power is connected.
- ykush_port_serial (int) – number of the YKUSH downstream port where the board’s serial port is connected.
Overview
The Quark D2000 board comes with a builtin JTAG / Flyswatter, whose port can be programmed. The serial port is externally provided via a USB-to-TTY dongle.
Because of this, to power the test target up, the power rail needs to first power up the serial dongle and then the board. There is a delay until the internal JTAG device can be accessed, so the system waits before starting OpenOCD to connect (via the JTAG) to the board.
Per this rationale, the need to avoid current leakage and to support a full power down necessitates this setup, which cuts all power to all cables connected to the board (power and serial).
Bill of materials
- two available ports on an YKUSH power switching hub (serial YKNNNNN)
- a Quark D2000 reference board
- a USB A-Male to micro-B male cable (for board power)
- a USB-to-TTY serial port dongle
- three M/M jumper cables
Connecting the test target fixture
- connect the Quark D2000’s USB-ATP port with the USB A-male to B-micro to YKUSH downstream port N1 for powering the board
- connect the USB-to-TTY serial adapter to the YKUSH downstream port N2
- connect the USB-to-TTY serial adapter to the Quark D2000 with the
M/M jumper cables:
- USB FTDI Black (ground) to board’s serial ground pin
- USB FTDI White (RX) to the board’s serial TX pin
- USB FTDI Green (TX) to board’s serial RX pin
- USB FTDI Red (power) is left open, it has 5V.
- connect the YKUSH to the server system and to power as described in conf_00_lib_pdu.ykush_targets_add()
Configuring the system for the fixture
- Choose a name for the target: mv-NN (where NN is a number)
- (if needed) Find the YKUSH’s serial number YKNNNNN [plug it and run dmesg for a quick find], see conf_00_lib_pdu.ykush_targets_add().
- Flash a new serial number on the Flyswatter2 following the instructions.
- Configure udev to add a name for the serial device that represents the USB-to-TTY dongle connected to the target so we can easily find it at /dev/tty-TARGETNAME. Different options for USB-to-TTY dongles with or without a USB serial number.
- Ensure the board is flashed with the Quark D2000 ROM (as described here).
- conf_00_lib_mcu.nios2_max10_add(name, device_id, serial_port_serial_number, pc_board, serial_port=None)
  Configure an Altera MAX10 NIOS-II
The Altera MAX10 is used to implement a NIOS-II CPU; it has a serial port, JTAG for flashing and power control.
The USB serial port is based on an FTDI chipset with a serial number, so it requires no modification. However, the JTAG connector has no serial number and can be addressed only by path.
Add to a server configuration file:

nios2_max10_add("max10-NN",
                "CABLEID",
                "SERIALNUMBER",
                ttbl.pc.dlwps7("http://admin:1234@HOST/PORT"))
restart the server and it yields:
$ tcf list local/max10-NN
Parameters: - name (str) – name of the target
- cableid (str) –
identification of the JTAG for the board; this can be determined using the jtagconfig tool from the Quartus Programming Tools; make sure only a single board is connected to the system and powered on and run:
$ jtagconfig
1) USB-BlasterII [2-2.1]
  031050DD   10M50DA(.|ES)/10M50DC
Note USB-BlasterII [2-2.1] is the cable ID for said board.
Warning
this cable ID is path dependent. Moving any of the USB cables (including the upstream hubs), including changing the ports to which the cables are connected, will change the cableid and will require re-configuration.
- serial_number (str) – USB serial number for the serial port of the MAX10 board.
- serial_port (str) – name of the serial port [defaults to /dev/tty-TARGETNAME]
- pc (ttbl.power.impl_c) – power controller to switch on/off the MAX10 board.
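When configuring several boards, the cable IDs can be extracted from jtagconfig's output mechanically; this parser is a hypothetical helper (not part of conf_00_lib_mcu or the Quartus tools), written against the sample output shown above:

```python
import re

def jtagconfig_cable_ids(output):
    """Return cable IDs like 'USB-BlasterII [2-2.1]' from jtagconfig output.

    Matches lines of the form 'N) NAME [PATH] ...' and rebuilds the
    'NAME [PATH]' cable ID for each.
    """
    return ["%s [%s]" % (name, path)
            for name, path in re.findall(
                r"^\s*\d+\)\s+(\S+)\s+\[([^\]]+)\]", output, re.MULTILINE)]
```

Remember the warning above: these IDs are path dependent, so re-run the extraction after any recabling.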
Bill of materials
- Altera MAX10 reference board
- Altera MAX10 power brick
- a USB A-Male to mini-B male cable (for JTAG)
- a USB A-Male to mini-B male cable (for UART)
- an available power socket in a power controller like the Digital Loggers Web Power Switch
- two USB ports leading to the server
Connecting the test target fixture
- connect the power brick to the MAX10 board
- connect the power plug to port N of the power controller POWERCONTROLLER
- connect a USB cable to the UART connector in the MAX10; connect to the server
- connect a USB cable to the JTAG connector in the MAX10; connect to the server
- ensure the DIP switches in SW2 (back of board) are all OFF except for switch 3, which has to be ON, and that J7 (front of board, next to the coaxial connectors) is open.
Configuring the system for the fixture
Ensure the system is setup for MAX10 boards:
- Setup ttbl.tt.tt_max10.quartus_path
- Setup ttbl.tt.tt_max10.input_sof
- Setup
Choose a name for the target: max10-NN (where NN is a number)
Configure udev to add a name for the serial device for the board’s serial console so it can be easily found at /dev/tty-TARGETNAME. Follow these instructions using the board’s serial number; e.g.:

SUBSYSTEM == "tty", \
  ENV{ID_SERIAL_SHORT} == "AC0054PT", \
  SYMLINK += "tty-max10-46"
- conf_00_lib_mcu.nrf5x_add(name, serial_number, family, serial_port=None, ykush_serial=None, ykush_port_board=None, openocd_path='/usr/bin/openocd', openocd_scripts='/usr/share/openocd/scripts', debug=False)
  Configure a NRF51 board for the fixture described below
  The NRF51 is an ARM M0-based development board. It includes a builtin JTAG which allows flashing and debugging; it only requires one upstream connection to a YKUSH power-switching hub for power, serial console and JTAG.
Add to a server configuration file:
nrf5x_add(name = "nrf51-NN",
          serial_number = "SERIALNUMBER",
          ykush_serial = "YKXXXXX",
          ykush_port_board = N)
restart the server and it yields:
$ tcf list local/nrf51-NN
Parameters: - name (str) – name of the target
- serial_number (str) – USB serial number for the board
- family (str) – Family of the board (nrf51_blenano, nrf51_pca10028, nrf52840_pca10056, nrf52_blenano2, nrf52_pca10040)
- serial_port (str) – (optional) name of the serial port, which defaults to /dev/tty-TARGETNAME.
- ykush_serial (str) – USB serial number of the YKUSH hub
- ykush_port_board (int) – number of the YKUSH downstream port where the board power is connected.
Overview
Per this rationale, the need to avoid current leakage and to support a full power down necessitates this setup, which cuts all power to all cables connected to the board (power and serial).
Bill of materials
- a nrf51 board
- a USB A-Male to micro-B male cable (for board power, JTAG and console)
- one available port on an YKUSH power switching hub (serial YKNNNNN)
Connecting the test target fixture
- connect the nrf51’s USB port with the USB A-male to B-micro to YKUSH downstream port N
- ensure the battery is disconnected
- connect the YKUSH to the server system and to power as described in conf_00_lib_pdu.ykush_targets_add()
Configuring the system for the fixture
- Choose a name for the target: nrf51-NN (where NN is a number)
- (if needed) Find the YKUSH’s serial number YKNNNNN [plug it and run dmesg for a quick find], see conf_00_lib_pdu.ykush_targets_add().
- Find the board’s serial number
- Configure udev to add a name for the serial device for the board’s serial console so it can be easily found at /dev/tty-TARGETNAME. Follow these instructions using the board’s serial number.
- conf_00_lib_mcu.nucleo_add(name=None, serial_number=None, serial_port=None, ykush_serial=None, ykush_port_board=None, openocd_path='/usr/bin/openocd', openocd_scripts='/usr/share/openocd/scripts', debug=False)
  Configure a Nucleo F10 board
  This is a backwards compatibility function, please use stm32_add().
- conf_00_lib_mcu.quark_c1000_add(name=None, serial_number=None, serial_port=None, ykush_serial=None, ykush_port_board=None, openocd_path='/opt/zephyr-sdk-0.10.0/sysroots/x86_64-pokysdk-linux/usr/bin/openocd', openocd_scripts='/opt/zephyr-sdk-0.10.0/sysroots/x86_64-pokysdk-linux/usr/share/openocd/scripts', debug=False, variant='qc10000_crb', target_type='ma')
  Configure a Quark C1000 for the fixture described below
The Quark C1000 development board has a built-in JTAG which allows flashing, debugging, thus it only requires an upstream connection to a YKUSH power-switching hub for power, serial console and JTAG.
This board has a USB serial number and should not require any flashing of the USB descriptors for setup.
Add to a server configuration file:
quark_c1000_add(name = "qc10000-NN",
                serial_number = "SERIALNUMBER",
                ykush_serial = "YKXXXXX",
                ykush_port_board = N)
restart the server and it yields:
$ tcf list local/qc10000-NN
Earlier versions of these boards can be added with the ma_add() and ah_add() versions of this function.
Parameters: - name (str) – name of the target
- serial_number (str) – USB serial number for the Quark C1000 board
- serial_port (str) – name of the serial port [FIXME: default to /dev/tty-TARGETNAME]
- ykush_serial (str) – USB serial number of the YKUSH hub
- ykush_port_board (int) – number of the YKUSH downstream port where the board power is connected.
- variant (str) – variant of ROM version and address map as defined in (FIXME) flasher configuration.
Overview
Per this rationale, the need to avoid current leakage and to support a full power down necessitates this setup, which cuts all power to all cables connected to the board (power and serial).
Bill of materials
- one available port on an YKUSH power switching hub (serial YKNNNNN)
- a Quark C1000 reference board
- a USB A-Male to micro-B male cable (for board power, JTAG and console)
Connecting the test target fixture
- connect the Quark C1000’s FTD_USB port with the USB A-male to B-micro to YKUSH downstream port N
- connect the YKUSH to the server system and to power as described in conf_00_lib_pdu.ykush_targets_add()
Configuring the system for the fixture
Choose a name for the target: qc10000-NN (where NN is a number)
(if needed) Find the YKUSH’s serial number YKNNNNN [plug it and run dmesg for a quick find], see
conf_00_lib_pdu.ykush_targets_add()
. Find the board’s serial number.
Configure udev to add a name for the serial device for the board’s serial console so it can be easily found at
/dev/tty-TARGETNAME
. Follow these instructions using the board’s serial number. Note, however, that these boards might present two serial ports to the system, one of which later converts to another interface. So, in order to avoid configuration issues, the right port has to be explicitly specified with ENV{ID_PATH} == “*:1.1”:
# Force second interface, first is for JTAG/update
SUBSYSTEM == "tty", ENV{ID_SERIAL_SHORT} == "IN0521621", ENV{ID_PATH} == "*:1.1", SYMLINK += "tty-TARGETNAME"
-
conf_00_lib_mcu.
target_qemu_zephyr_desc
= {'arm': {'cmdline': ['/opt/zephyr-sdk-0.10.0/sysroots/x86_64-pokysdk-linux/usr/bin/qemu-system-arm', '-cpu', 'cortex-m3', '-machine', 'lm3s6965evb', '-nographic', '-vga', 'none'], 'zephyr_board': 'qemu_cortex_m3'}, 'nios2': {'cmdline': ['/opt/zephyr-sdk-0.10.0/sysroots/x86_64-pokysdk-linux/usr/bin/qemu-system-nios2', '-machine', 'altera_10m50_zephyr', '-nographic']}, 'riscv32': {'cmdline': ['/opt/zephyr-sdk-0.10.0/sysroots/x86_64-pokysdk-linux/usr/bin/qemu-system-riscv32', '-nographic', '-machine', 'sifive_e']}, 'x86': {'cmdline': ['/opt/zephyr-sdk-0.10.0/sysroots/x86_64-pokysdk-linux/usr/bin/qemu-system-i386', '-m', '8', '-cpu', 'qemu32,+nx,+pae', '-device', 'isa-debug-exit,iobase=0xf4,iosize=0x04', '-nographic', '-no-acpi']}, 'x86_64': {'cmdline': ['/opt/zephyr-sdk-0.10.0/sysroots/x86_64-pokysdk-linux/usr/bin/qemu-system-x86_64', '-nographic']}, 'xtensa': {'cmdline': ['/opt/zephyr-sdk-0.10.0/sysroots/x86_64-pokysdk-linux/usr/bin/qemu-system-xtensa', '-machine', 'sim', '-semihosting', '-nographic', '-cpu', 'sample_controller']}}¶ QEMU Zephyr target descriptors
Dictionary describing the supported BSPs for QEMU targets and what Zephyr board and commandline they map to.
The key is the TCF BSP which maps to the binary qemu-system-BSP. If the field zephyr_board is present, it refers to how that BSP is known to the Zephyr OS.
New entries can be added with:
>>> target_qemu_zephyr_desc['NEWBSP'] = dict(
>>>     cmdline = [
>>>         '/usr/bin/qemu-system-NEWBSP',
>>>         'arg1', 'arg2', ...
>>>     ],
>>>     zephyr_board = 'NEWBSPZEPHYRNAME'
>>> )
-
conf_00_lib_mcu.
target_qemu_zephyr_add
(name, bsp=None, zephyr_board=None, target_type=None, nw_name=None, cmdline=None)¶ Add a QEMU target that can run the Zephyr OS.
Parameters: - name (str) – target’s name.
- bsp (str) – what architecture the target shall implement;
shall be available in
target_qemu_zephyr_desc
. - zephyr_board (str) – (optional) type of this target’s BSP for
the Zephyr OS; defaults to whatever
target_qemu_zephyr_desc
declares or BSP if none. - target_type (str) – (optional) what type the target shall declare; defaults to qz-BSP.
- nw_name (str) –
(optional) name of network/interconnect to which the target is connected. Note that the configuration code shall manually configure the network metadata as this serves only to ensure a TAP device is created before the QEMU daemon is started. E.g.:
>>> target = target_qemu_zephyr_add("qzx86-36a", 'x86', nw_name = "nwa")
>>> x, y, _ = nw_indexes('a')
>>> index = 36
>>> target.add_to_interconnect(    # Add target to the interconnect
>>>     "nwa", dict(
>>>         mac_addr = "02:%02x:00:00:%02x:%02x" % (x, y, index),
>>>         ipv4_addr = '192.%d.%d.%d' % (x, y, index),
>>>         ipv4_prefix_len = 24,
>>>         ipv6_addr = 'fc00::%02x:%02x:%02x' % (x, y, index),
>>>         ipv6_prefix_len = 112)
>>> )
- cmdline (str) –
(optional) command line to start this QEMU virtual machine; defaults to whatever
target_qemu_zephyr_desc
declares. Normally you do not need to set this; see
ttbl.qemu.pc
for details on the command line specification if you think you do.
-
conf_00_lib_mcu.
sam_xplained_add
(name=None, serial_number=None, serial_port=None, ykush_serial=None, ykush_port_board=None, openocd_path='/usr/bin/openocd', openocd_scripts='/usr/share/openocd/scripts', debug=False, target_type='sam_e70_xplained')¶ Configure SAM E70/V71 boards for the fixture described below
The SAM E70/V71 Xplained is an ARM-based development board. It includes a built-in JTAG which allows flashing and debugging; it only requires one upstream connection to a YKUSH power-switching hub for power, serial console and JTAG.
Add to a server configuration file:
sam_xplained_add(
    name = "sam-e70-NN",
    serial_number = "SERIALNUMBER",
    serial_port = "/dev/tty-same70-NN",
    ykush_serial = "YKXXXXX",
    ykush_port_board = N,
    target_type = "sam_e70_xplained")   # or sam_v71_xplained
restart the server and it yields:
$ tcf list local/sam-e70-NN local/sam-v71-NN
Parameters: - name (str) – name of the target
- serial_number (str) – USB serial number for the SAM board
- serial_port (str) – (optional) name of the serial port (defaults to
/dev/tty-TARGETNAME
) - ykush_serial (str) – USB serial number of the YKUSH hub where it is connected to for power control.
- ykush_port_board (int) – number of the YKUSH downstream port where the board power is connected.
- target_type (str) – the target type “sam_e70_xplained” or “sam_v71_xplained”
Overview
Per this rationale, current leakage and full power-down needs necessitate this setup, which cuts all power to all cables connected to the board (power and serial).
Bill of materials
- a SAM E70 or V71 xplained board
- a USB A-Male to micro-B male cable (for board power, JTAG and console)
- one available port on a YKUSH power switching hub (serial YKNNNNN)
Connecting the test target fixture
Ensure the SAM E70 is properly setup:
Using Atmel’s SAM-BA In-system programmer, change the boot sequence and reset the board in case there is a bad image; this utility can be also used to recover the board in case it gets stuck.
Download from Atmel’s website (registration needed) and install.
Note
This is not open source software
Close the erase jumper (on the SAM E70 that is J200 and on the SAM V71 it is J202; in both cases, it is located above the CPU when you rotate the board so you can read the CPU’s labeling in a normal orientation).
Connect the USB cable to the target’s target USB port (the one next to the Ethernet connector) and to a USB port that is known to be powered on.
Ensure power is on by verifying the orange LED lights up on the Ethernet RJ-45 connector.
Wait 10 seconds
Open the erase jumper J202 to stop erasing
Open SAM-BA 2.16
Note on Fedora 25 you need to run sam-ba_64 from the SAM-BA package.
Select which serial port is that of the SAM e70 connected to the system. Use lsusb.py -ciu to locate the tty/ttyACM device assigned to your board:
$ lsusb.py -ciu
...
 2-1      03eb:6124 02  2.00  480MBit/s 100mA 2IFs (Atmel Corp. at91sam SAMBA bootloader)
   2-1:1.0   (IF) 02:02:00 1EP  (Communications:Abstract (modem):None) cdc_acm tty/ttyACM2
   2-1:1.1   (IF) 0a:00:00 2EPs (CDC Data:) cdc_acm
...
(in this example
/dev/tty/ttyACM2
). Select the board at91same70-xplained and click Connect.
Choose the Flash tab and, in the Scripts drop-down menu, choose Boot from Flash (GPNVM1), then Execute.
Exit SAM-BA
connect the SAM E70/V71’s Debug USB port with the USB A-male to B-micro to YKUSH downstream port N
connect the YKUSH to the server system and to power as described in
conf_00_lib_pdu.ykush_targets_add()
Configuring the system for the fixture
- Choose a name for the target: sam-e70-NN (where NN is a number)
- (if needed) Find the YKUSH’s serial number YKNNNNN [plug it and
run dmesg for a quick find], see
conf_00_lib_pdu.ykush_targets_add()
. - Find the board’s serial number
- Configure udev to add a name for the serial device for the
board’s serial console so it can be easily found at
/dev/tty-TARGETNAME
. Follow these instructions using the board’s serial number.
-
conf_00_lib_mcu.
simics_zephyr_cmds
= '$disk_image = "%(simics_hd0)s"\n$cpu_class = "pentium-pro"\n$text_console = TRUE\nrun-command-file "%%simics%%/targets/x86-440bx/x86-440bx-pci-system.include"\ncreate-telnet-console-comp $system.serconsole %(simics_console_port)d\nconnect system.serconsole.serial cnt1 = system.motherboard.sio.com[0]\ninstantiate-components\nsystem.serconsole.con.capture-start "%(simics_console)s"\nc\n'¶ Commands to configure Simics to run a simulation for Zephyr by default
Fields available
via string formatting: %(FIELD)s
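As an illustration, the %(FIELD)s placeholders in the command string above are presumably expanded with standard Python %-formatting; the field names (simics_hd0, simics_console_port, simics_console) come from the default string, but the values below are hypothetical:

```python
# Sketch: expanding the %(FIELD)s placeholders of simics_zephyr_cmds
# with Python %-formatting. Field names are taken from the default
# string above; the concrete paths and port are made-up examples.
template = ('$disk_image = "%(simics_hd0)s"\n'
            'create-telnet-console-comp $system.serconsole '
            '%(simics_console_port)d\n'
            'system.serconsole.con.capture-start "%(simics_console)s"\n')

expanded = template % dict(
    simics_hd0 = "/var/run/ttbd/sz01-hd0.img",      # hypothetical path
    simics_console_port = 2023,                     # hypothetical port
    simics_console = "/var/run/ttbd/sz01-console",  # hypothetical path
)
print(expanded)
```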
-
conf_00_lib_mcu.
simics_zephyr_add
(name, simics_cmds='$disk_image = "%(simics_hd0)s"\n$cpu_class = "pentium-pro"\n$text_console = TRUE\nrun-command-file "%%simics%%/targets/x86-440bx/x86-440bx-pci-system.include"\ncreate-telnet-console-comp $system.serconsole %(simics_console_port)d\nconnect system.serconsole.serial cnt1 = system.motherboard.sio.com[0]\ninstantiate-components\nsystem.serconsole.con.capture-start "%(simics_console)s"\nc\n')¶ Configure a virtual Zephyr target running inside Simics
Simics is a platform simulator available from Wind River Systems; it can be used to implement a virtual machine environment that will be treated as a target.
Add to your configuration file
/etc/ttbd-production/conf_10_targets.py
:

simics_zephyr_add("szNN")
restart the server and it yields:
$ tcf list local/szNN
Parameters: name (str) – name of the target (naming best practices).
Overview
A Simics invocation in a standalone workspace will be created by the server for each target when it is powered on. This driver currently supports only booting an ELF target and console output (no console input or debugging). For more details, see
ttbl.tt.simics
. Note the default Simics settings for Zephyr are defined in
simics_zephyr_cmds
and you can create targets which use a different Simics configuration by specifying it as a string in the parameter simics_cmds.
Bill of materials
Simics installed in your server machine
ttbl.tt.simics
expects a global environment variable SIMICS_BASE_PACKAGE defined to point to where Simics (and its extension packages) have been installed; e.g.:
SIMICS_BASE_PACKAGE=/opt/simics/5.0/simics-5.0.136
-
conf_00_lib_mcu.
stm32_add
(name=None, serial_number=None, serial_port=None, ykush_serial=None, ykush_port_board=None, openocd_path='/usr/bin/openocd', openocd_scripts='/usr/share/openocd/scripts', model=None, zephyr_board=None, debug=False)¶ Configure a Nucleo/STM32 board
The Nucleo / STM32 are ARM-based development boards. They include a built-in JTAG which allows flashing and debugging; they only require one upstream connection to a YKUSH power-switching hub for power, serial console and JTAG.
Add to a server configuration file:
stm32_add(name = "stm32f746-67",
          serial_number = "066DFF575251717867114355",
          ykush_serial = "YK23406",
          ykush_port_board = 3,
          model = "stm32f746")
restart the server and it yields:
$ tcf list local/stm32f746-67
Parameters: - name (str) – name of the target
- serial_number (str) – USB serial number for the board
- serial_port (str) – (optional) name of the serial port (defaults to /dev/tty-TARGETNAME).
- ykush_serial (str) – USB serial number of the YKUSH hub
- ykush_port_board (int) – number of the YKUSH downstream port where the board is connected.
- openocd_path (str) –
(optional) path to where the OpenOCD binary is installed (defaults to system’s).
Warning
Zephyr SDK 0.9.5’s version of OpenOCD is not able to flash some of these boards.
- openocd_scripts (str) – (optional) path to where the OpenOCD scripts are installed (defaults to system’s).
- model (str) –
String which describes this model to the OpenOCD configuration. This matches the model of the board in the packaging. E.g:
- stm32f746
- stm32f103
see below for the mechanism to add more via configuration
- zephyr_board (str) – (optional) string to configure as the board model used for Zephyr builds. In most cases it will be inferred automatically.
- debug (bool) – (optional) operate in debug mode (more verbose log from OpenOCD) (defaults to false)
Overview
Per this rationale, current leakage and full power-down needs necessitate this setup, which cuts all power to all cables connected to the board (power and serial).
Bill of materials
- one STM32* board
- a USB A-Male to micro-B male cable (for board power, flashing and console)
- one available port on a YKUSH power switching hub (serial YKNNNNN)
Connecting the test target fixture
- connect the STM32 micro USB port with the USB A-male to B-micro to YKUSH downstream port N
- connect the YKUSH to the server system and to power as
described in
conf_00_lib_pdu.ykush_targets_add()
Configuring the system for the fixture
- Choose a name for the target: stm32MODEL-NN (where NN is a number)
- (if needed) Find the YKUSH’s serial number YKNNNNN [plug it and
run dmesg for a quick find], see
conf_00_lib_pdu.ykush_targets_add()
. - Find the board’s serial number
- Configure udev to add a name for the serial device for the
board’s serial console so it can be easily found at
/dev/tty-TARGETNAME
. Follow these instructions using the board’s serial number. - Add the configuration block described at the top of this documentation and restart the server
Extending configuration for new models
Models not supported by current configuration can be expanded by adding a configuration block such as:
import ttbl.flasher

ttbl.flasher.openocd_c._addrmaps['stm32f7'] = dict(
    arm = dict(load_addr = 0x08000000)
)

ttbl.flasher.openocd_c._boards['stm32f746'] = dict(
    addrmap = 'stm32f7',
    targets = [ 'arm' ],
    target_id_names = { 0: 'stm32f7x.cpu' },
    write_command = "flash write_image erase %(file)s %(address)s",
    config = """
#
# openocd.cfg configuration from
# zephyr.git/boards/arm/stm32f746g_disco/support/openocd.cfg
#
source [find board/stm32f7discovery.cfg]

$_TARGETNAME configure -event gdb-attach {
        echo "Debugger attaching: halting execution"
        reset halt
        gdb_breakpoint_override hard
}

$_TARGETNAME configure -event gdb-detach {
        echo "Debugger detaching: resuming execution"
        resume
}
"""
)

stm32_models['stm32f746'] = dict(zephyr = "stm32f746g_disco")
-
conf_00_lib_mcu.
tinytile_add
(name, usb_serial_number, ykush_serial=None, ykush_port_board=None, ykush_port_serial=None, serial_port=None)¶ Configure a tinyTILE for the fixture described below.
The tinyTILE is a miniaturization of the Arduino/Genuino 101 (see https://www.zephyrproject.org/doc/boards/x86/tinytile/doc/board.html).
The fixture used by this configuration uses a YKUSH hub for power switching, no debug/JTAG interface, and allows for an optional external serial port using a USB-to-TTY serial adapter.
Add to a server configuration file:
tinytile_add("ti-NN", "SERIALNUMBER", "YKNNNNN", PORTNUMBER, [ykush_port_serial = N2,] [serial_port = "/dev/tty-NAME"])
restart the server and it yields:
$ tcf list local/ti-NN
Parameters: - name (str) – name of the target
- usb_serial_number (str) – USB serial number for the tinyTILE
- ykush_serial (str) – USB serial number of the YKUSH hub
- ykush_port_board (int) – number of the YKUSH downstream port where the board is connected.
- ykush_port_serial (int) – (optional) number of the YKUSH downstream port where the board’s serial port is connected.
- serial_port (str) – (optional) name of the serial port
(defaults to
/dev/tty-NAME
)
Overview
The tinyTILE is powered via the USB connector. The tinyTILE does not export a serial port over the USB connector–applications loaded onto it might create a USB serial port, but this is not necessarily so all the time.
Thus, for ease of use this fixture connects an optional external USB-to-TTY dongle to the TX/RX/GND lines of the tinyTILE that allows a reliable serial console to be present. To allow for proper MCU board reset, this serial port has to be also power switched on the same YKUSH hub (to avoid ground derivations).
For the serial console output to be usable, the Zephyr app’s configuration has to be altered to change the console to said UART. The client side needs to be aware of that (via configuration, for example, to the Zephyr App Builder).
When the serial dongle is in use, the power rail needs to first power up the serial dongle and then the board.
Per this rationale, current leakage and full power-down needs necessitate this setup, which cuts all power to all cables connected to the board (power and serial).
This fixture uses
ttbl.tt.tt_dfu
to implement the target; refer to it for implementation details.
Bill of materials
- two available ports on a YKUSH power switching hub (serial YKNNNNN); only one if the serial console will not be used.
- a tinyTILE board
- a USB A-Male to micro-B male cable (for board power)
- a USB-to-TTY serial port dongle
- three M/M jumper cables
Connecting the test target fixture
- (if not yet connected), connect the YKUSH to the server system
and to power as described in
conf_00_lib_pdu.ykush_targets_add()
- connect the Tiny Tile’s USB port to the YKUSH downstream port N1
- (if a serial console will be connected) connect the USB-to-TTY serial adapter to the YKUSH downstream port N2
- (if a serial console will be connected) connect the USB-to-TTY
serial adapter to the Tiny Tile with the M/M jumper cables:
- USB FTDI Black (ground) to Tiny Tile’s serial ground pin (fourth pin from the bottom)
- USB FTDI White (RX) to the Tiny Tile’s TX.
- USB FTDI Green (TX) to Tiny Tile’s RX.
- USB FTDI Red (power) is left open, it has 5V.
Configuring the system for the fixture
Choose a name for the target: ti-NN (where NN is a number)
(if needed) Find the YKUSH’s serial number YKNNNNN [plug it and run dmesg for a quick find], see
conf_00_lib_pdu.ykush_targets_add()
. Find the board’s serial number.
Note these boards, when freshly plugged in, will only stay in DFU mode for five seconds and then boot Zephyr (or whichever OS they have), so the USB device will disappear. You need to run lsusb (or whichever command you are using) quickly, or monitor the kernel output with dmesg -w.
Configure udev to add a name for the serial device that represents the USB-to-TTY dongle connected to the target so we can easily find it at
/dev/tty-TARGETNAME
. There are different options for USB-to-TTY dongles, with or without a USB serial number.
8.6.2. TTBD configuration library to add targets that use Provisioning OS to flash OS images¶
-
conf_00_lib_pos.
nw_indexes
(nw_name)¶ Return the network indexes that correspond to a one or two letter network name.
Parameters: nw_name (str) – a one or two letter network name in the set a-zA-Z. Returns: x, y, vlan_id; x and y are meant to be used for creating IP addresses (IPv4 192.x.y.0/24, IPv6 fc00::x:y:0/112) (yes, 192.x.y.0/24 with x != 168 is not a private range, but this is supposed to be running inside a private network anyway, so you won’t be able to route there).
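Based on the address scheme described here and in pos_target_add() below (192.168.ascii(X).0/24 for one-letter names, 192.ascii(X).ascii(Y).0/24 for two-letter ones), the letter-to-index mapping can be sketched as follows; this is an illustrative re-implementation, not the actual conf_00_lib_pos code, and the vlan_id derivation in particular is an assumption:

```python
def nw_indexes_sketch(nw_name):
    # Illustrative re-implementation of nw_indexes(); not the real code.
    assert 1 <= len(nw_name) <= 2 and nw_name.isalpha(), \
        "network name must be one or two ASCII letters"
    if len(nw_name) == 1:
        # one-letter networks stay inside the 192.168.x.0/24 private range
        x, y = 168, ord(nw_name)
    else:
        # two-letter networks use the ASCII codes of both letters
        x, y = ord(nw_name[0]), ord(nw_name[1])
    vlan_id = x * 256 + y    # assumption: some deterministic derivation
    return x, y, vlan_id

x, y, _ = nw_indexes_sketch('b')
print("192.%d.%d.0/24" % (x, y))    # network 'b' -> 192.168.98.0/24
```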
-
conf_00_lib_pos.
nw_pos_add
(nw_name, power_rail=None, mac_addr=None, vlan=None, ipv4_prefix_len=24, ipv6_prefix_len=112)¶ Adds configuration for a network with Provisioning OS support.
This sets up the network with the power rails needed for targets that can boot Provisioning OS to deploy images to their hard drives.
For example, to add nwb, 192.168.98.0/24 with the server on 192.168.98.1 adding proxy port redirection from the isolated network to that upstream the server:
>>> x, y, _ = nw_indexes('b')
>>> interconnect = nw_pos_add(
>>>     'b', mac_addr = '00:50:b6:27:4b:77',
>>>     power_rail = [
>>>         ttbl.pc.dlwps7('http://admin:1234@sp7/4'),
>>>         # disable the proxy redirection, using tinyproxy
>>>         # running on :8888
>>>         # Mirrors of Clear and other stuff, see distro_mirrors below
>>>         ttbl.socat.pci('tcp', "192.%d.%d.1" % (x, y), 1080,
>>>                        'linux-ftp.jf.intel.com', 80),
>>>         ttbl.socat.pci('tcp', "192.%d.%d.1" % (x, y), 1443,
>>>                        'linux-ftp.jf.intel.com', 443),
>>>     ])
>>>
>>> interconnect.tags_update(dict(
>>>     # implemented by tinyproxy running in the server
>>>     ftp_proxy = "http://192.%d.%d.1:8888" % (x, y),
>>>     http_proxy = "http://192.%d.%d.1:8888" % (x, y),
>>>     https_proxy = "http://192.%d.%d.1:8888" % (x, y),
Note how first we calculate, from the network name, the nibbles we’ll use for IP addresses. This is only needed because we are adding extras to the basic configuration.
Parameters: - nw_name (str) –
network name, which must be one or two ASCII letters, upper or lowercase; see best naming practices.
>>> letter = "aD"
would yield a network called nwAD.
- mac_addr (str) –
(optional) if specified, this is connected to the physical network adapter in the server with the given MAC address, in six colon-separated hex octets (hh:hh:hh:hh:hh:hh).
Note the TCF server will take over said interface (bring it up and down, remove and add IP addresses, etc.), so it cannot be shared with any interface being used for other things.
- vlan (int) –
(optional) use Ethernet VLANs
- None: do not use vlans (default)
- 0: configure this network to use a VLAN on the physical interface; the VLAN ID is calculated from the network name.
- N > 0: the number is used as the VLAN ID.
- power_rail (list) –
(optional) list of
ttbl.power.impl_c
objects that control power to this network.This can be used to power on/off switches, start daemons, etc when the network is started:
>>> power_rail = [
>>>     # power on the network switch plugged to PDU sp7, socket 4
>>>     ttbl.pc.dlwps7('http://admin:1234@sp7/4'),
>>>     # start two port redirectors to a proxy
>>>     ttbl.socat.pci('tcp', "192.168.%d.1" % nw_idx, 1080,
>>>                    'proxy-host.domain', 80),
>>>     ttbl.socat.pci('tcp', "192.168.%d.1" % nw_idx, 1443,
>>>                    'proxy-host.domain', 443),
>>> ]
Returns: the interconnect object added
-
conf_00_lib_pos.
pos_target_name_split
(name)¶
-
conf_00_lib_pos.
target_pos_setup
(target, nw_name, pos_boot_dev, linux_serial_console_default, pos_nfs_server=None, pos_nfs_path=None, pos_rsync_server=None, boot_config=None, boot_config_fix=None, boot_to_normal=None, boot_to_pos=None, mount_fs=None, pos_http_url_prefix=None, pos_image=None, pos_partsizes=None)¶ Given an existing target, add to it metadata used by the Provisioning OS mechanism.
Parameters: - nw_name (str) – name of the network target that provides POS services to this target
- pos_boot_dev (str) – which is the boot device to use,
where the boot loader needs to be installed in a boot
partition. e.g.:
sda
for /dev/sda ormmcblk01
for /dev/mmcblk01. - linux_serial_console_default (str) –
which device the target sees as the system’s serial console connected to TCF’s boot console.
If DEVICE (eg: ttyS0) is given, Linux will be booted with the argument console=DEVICE,115200.
- pos_nfs_server (str) –
(optional) IPv4 address of the NFS server that provides the Provisioning OS root filesystem
e.g.: 192.168.0.6
Default is None, and thus taking from what the boot interconnect declares in the same metadata.
- pos_nfs_path (str) –
path in the NFS server for the Provisioning OS root filesystem.
Normally this is set from the information exported by the network nw_name.
e.g.: /home/ttbd/images/tcf-live/x86_64/.
Default is None, and thus taking from what the boot interconnect declares in the same metadata.
- pos_rsync_server (str) –
(optional) RSYNC URL where the Provisioning OS images are available.
eg: 192.168.0.6::images
Default is None, and thus taking from what the boot interconnect declares in the same metadata.
- boot_config (str) –
(optional)
capability
to configure the boot loader. E.g.:
*uefi*
(default) - boot_config_fix (str) –
(optional)
capability
to fix the boot loader configuration. E.g.:
*uefi*
(default) - boot_to_normal (str) –
(optional)
capability
to boot the system in normal (non provisioning) mode. E.g.:
*pxe*
(default) - boot_to_pos (str) –
(optional)
capability
to boot the system in provisioning mode. E.g.:
*pxe*
(default) - mount_fs (str) –
(optional)
capability
to partition, select and mount the root filesystem during provisioning mode. E.g.:
*multiroot*
(default) - pos_http_url_prefix (str) –
(optional) prefix to give to the kernel/initrd for booting over TFTP or HTTP. Note: you want a trailing slash:
e.g.: http://192.168.0.6/ttbd-pos/x86_64/ for HTTP boot
e.g.: subdir for TFTP boot from the subdir subdirectory
Default is None, and thus taking from what the boot interconnect declares in the same metadata.
- pos_image (str) –
(optional) name of the Provisioning image to use, which will be used for the kernel name, initrd name and NFS root path:
- kernel: vmlinuz-POS-IMAGE
- initrd: initrd-POS-IMAGE
- root-path: POS-IMAGE/ARCHITECTURE
e.g.: tcf-live (default)
- pos_partsizes (str) – (optional) sizes of the different partitions when using the multiroot system to manage the target’s disk; see Partition Size specification.
e.g.: “1:10:30:20” (default)
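The pos_image naming scheme above (kernel vmlinuz-POS-IMAGE, initrd initrd-POS-IMAGE, root path POS-IMAGE/ARCHITECTURE) can be sketched with a small helper; the helper name is hypothetical, only the naming pattern comes from the documentation:

```python
def pos_image_files(pos_image = "tcf-live", arch = "x86_64"):
    # Hypothetical helper following the naming scheme documented above
    return dict(
        kernel = "vmlinuz-" + pos_image,      # e.g. vmlinuz-tcf-live
        initrd = "initrd-" + pos_image,       # e.g. initrd-tcf-live
        root_path = pos_image + "/" + arch,   # e.g. tcf-live/x86_64
    )

print(pos_image_files())
```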
-
conf_00_lib_pos.
pos_target_add
(name, mac_addr, power_rail, boot_disk, pos_partsizes, linux_serial_console, target_type=None, target_type_long=None, index=None, network=None, power_on_pre_hook=None, extra_tags=None, pos_nfs_server=None, pos_nfs_path=None, pos_rsync_server=None, boot_config=None, boot_config_fix=None, boot_to_normal=None, boot_to_pos=None, mount_fs=None, pos_http_url_prefix=None, pos_image=None, ipv4_prefix_len=24, ipv6_prefix_len=112)¶ Add a PC-class target that can be provisioned using Provisioning OS.
Parameters: - name (str) –
target’s name, following the convention *TYPE-NNNETWORK*:
- TYPE is the target’s short type that describes targets that are generally the same
- NN is a number 2 to 255
- NETWORK is the name of the network it is connected to (the network target is actually called nwNETWORK), see naming networks.
>>> pos_target_add('nuc5-02a', ..., target_type = "Intel NUC5i5U324")
- mac_addr (str) –
MAC address for this target on its connection to network nwNETWORK.
Can’t be the same as any other MAC address in the system or that network. It shall be in the standard format of six colon-separated hex octets:
>>> pos_target_add('nuc5-02a', 'c0:3f:d5:67:07:81', ...)
- power_rail (str) –
Power control instance to power switch this target, eg:
>>> pos_target_add('nuc5-02a', 'c0:3f:d5:67:07:81',
>>>                ttbl.pc.dlwps7("http://admin:1234@POWERSWITCHANEM/3"),
>>>                ...)
This can also be a list of these if multiple components need to be powered on/off to power on/off the target.
>>> pos_target_add('nuc5-02a', 'c0:3f:d5:67:07:81',
>>>                [
>>>                    ttbl.pc.dlwps7("http://admin:1234@POWERSWITCHANEM/3"),
>>>                    ttbl.pc.dlwps7("http://admin:1234@POWERSWITCHANEM/4"),
>>>                    ttbl.ipmi.pci("BMC_HOSTNAME")
>>>                ],
>>>                ...)
- power_rail –
Address of the
Digital Logger Web Power Switch
in the form [USER:PASSWORD@]HOSTNAME/PLUGNUMBER (legacy form).
eg: for a target nuc5-02a connected to plug #5 of a DLWPS7 PDU named sp10
>>> pos_target_add('nuc5-02a', power_rail_dlwps = 'sp10/5', ...)
Note there has to be at least one power spec given
- boot_disk (str) –
base name of the disk (as seen by Linux) from which the device will boot to configure it as a boot loader and install a root filesystem on it
eg for nuc5-02a:
>>> pos_target_add("nuc5-2a", MAC, POWER, 'sda')
Note /dev/ is not needed.
- pos_partsizes (str) –
sizes of the partitions to create; this is a list of four numbers with sizes in gigabytes for the boot, swap, scratch and root partitions.
eg:
>>> pos_target_add("nuc5-2a", ..., pos_partsizes = "1:4:10:5")
will create in this target a boot partition 1 GiB in size, then a swap partition 4 GiB, a scratch partition 10 GiB and then multiple root filesystem partitons of 5 GiB each (until the disk is exhausted).
- linux_serial_console (str) –
name of the device that Linux sees when it boots as a serial console
eg:
>>> pos_target_add("nuc5-02a", ... linux_serial_console = "ttyS0"...)
>>> pos_target_add("nuc6-03b", ... linux_serial_console = "ttyUSB0"...)
Note /dev/ is not needed and that this is the device the target sees, not the server.
- target_type (str) –
(optional) override target’s type (guessed from the name), which will be reported in the type target metadata; eg, for Intel NUC5i5:
>>> pos_target_add("nuc5-02a", ..., target_type = "Intel NUC5i5U324")
The HW usually has many different types that are extremely similar; when such is the case, the type can be set to a common prefix and the tag type_long then added to contain the full type name (this helps simplifying the setup); see target_type_long and extra_tags below.
- target_type_long (str) – (optional) long version of the target type (see above). Defaults to the same as target_type
- index (int) –
(optional) override the target’s index guessed from the name with a number (between 2 and 254); in the name it will be formatted with at least two decimal digits.
>>> pos_target_add("nuc5-02a", index = 3, ...)
In this case, target nuc5-02a will be assigned a default IP address of 192.168.97.3 instead of 192.168.97.2.
- network (str) –
(optional) override the network name guessed from the target’s name.
This is one or two ASCII letters, upper or lowercase; see best naming practices.
eg for nuc5-02c:
>>> pos_target_add("nuc5-02c", network = 'a', ...)
The network naming convention nwa of the example helps keep network names short, needed due to interface-name length limitations in Linux (for example). Note the IP addresses for nwX are 192.168.ascii(X).0/24; thus for nuc5-02a in the example, its IP address will be 192.168.97.2.
If the network were, for example, Gk, the IP address would be 192.71.107.2 (71 being ASCII(G), 107 ASCII(k)).
- extra_tags (dict) –
extra tags to add to the target for information
eg:
>>> pos_target_add(name_prefix = "nuc5", ..., dict(
>>>     fixture_usb_disk = "4289273ADF334",
>>>     fixture_usb_disk = "4289273ADF334"
>>> ))
- power_on_pre_hook –
(optional) function the server calls before powering on the target so it boots in Provisioning OS mode or normal mode.
This might be configuring the DHCP server to offer a TFTP file or configuring the TFTP configuration file a bootloader will pick, etc; for examples, look at:
ttbl.dhcp.power_on_pre_pos_setup()
ttbl.ipmi.pci.pre_power_pos_setup()
ttbl.ipmi.pci_ipmitool.pre_power_pos_setup()
Default is
ttbl.dhcp.power_on_pre_pos_setup()
.
For other parameters possible to control the POS settings, please look at
target_pos_setup()
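The pos_partsizes specification used by this function (and by target_pos_setup() above) is a colon-separated list of four sizes in GiB for the boot, swap, scratch and root partitions. A minimal parser sketch (the helper name is hypothetical; only the format comes from the documentation):

```python
def parse_partsizes(spec):
    # Hypothetical helper: "1:4:10:5" -> boot 1 GiB, swap 4 GiB,
    # scratch 10 GiB, and root filesystem partitions of 5 GiB each,
    # per the pos_partsizes documentation above
    boot, swap, scratch, root = (int(size) for size in spec.split(":"))
    return dict(boot = boot, swap = swap, scratch = scratch, root = root)

print(parse_partsizes("1:4:10:5"))
```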
-
conf_00_lib_pos.
qemu_iftype_to_pos_boot_dev
= {'ide': 'sda', 'scsi': 'sda', 'virtio': 'vda'}¶ Map QEMU’s hard drive interfaces to how Provisioning OS sees them as their boot device
Can be extended in configuration with:
>>> qemu_iftype_to_pos_boot_dev['NEWIFTYPE'] = 'xka1'
-
conf_00_lib_pos.
target_qemu_pos_add
(target_name, nw_name, mac_addr, ipv4_addr, ipv6_addr, consoles=None, disk_size='30G', mr_partsizes='1:2:2:10', sd_iftype='ide', extra_cmdline='', ram_megs=2048)¶ Add a QEMU virtual machine capable of booting over Provisioning OS.
This target supports one or more serial consoles, a graphics interface exported via VNC and a single hard drive using
ttbl.qemu.pc
as backend. Note this target uses a UEFI BIOS and defines UEFI storage space; this is needed so the right boot order is maintained.
Add to a server configuration file
/etc/ttbd-*/conf_*.py
>>> target = target_qemu_pos_add("qu-05a",
>>>                              "nwa",
>>>                              mac_addr = "02:61:00:00:00:05",
>>>                              ipv4_addr = "192.168.95.5",
>>>                              ipv6_addr = "fc00::61:05")
See an example usage in
conf_06_default.nw_default_targets_add()
to create default targets. Extra parameters can be added by using the extra_cmdline argument, for example, to add another drive:
>>> extra_cmdline = "-drive file=%%(path)s/hd-extra.qcow2,if=virtio,aio=threads"
Adding to other networks:
>>> target.add_to_interconnect(
>>>     'nwb', dict(
>>>         mac_addr = "02:62:00:00:00:05",
>>>         ipv4_addr = "192.168.98.5",
>>>         ipv6_addr = "fc00::62:05"))
Parameters: - target_name (str) – name of the target to create
- nw_name (str) – name of the network to which this target will be connected that provides Provisioning OS services.
- mac_addr (str) –
MAC address for this target (for QEMU usually a fake one).
Will be given to the virtual device created and can’t be the same as any other MAC address in the system or the networks. It is recommended to be in the format:
>>> 02:HX:00:00:00:HY
where HX and HY are two hex digits; the 02:… prefix is locally administered Ethernet address space, so made-up addresses there will not clash with real vendor-assigned ones.
- ipv4_addr (str) – IPv4 Address (32bits, DDD.DDD.DDD.DDD, where DDD are decimal integers 0-255) that will be assigned to this target in the network.
- ipv6_addr (str) – IPv6 Address (128bits, standard ipv6 colon format) that will be assigned to this target in the network.
- consoles (list(str)) –
(optional) names of serial consoles to create (defaults to just one, ttyS0). E.g:
>>> consoles = [ "ttyS0", "ttyS1", "com3" ]
these names are used to create the internal QEMU names and are the names the TCF daemon will use to refer to the consoles. In the machine, they will be standard serial ports, in that order.
- disk_size (str) –
(optional) size specification for the target’s hard drive, as understood by QEMU’s qemu-img create program. Defaults to:
>>> disk_size = "30G"
- mr_partsizes (str) –
(optional) sizes of the different partitions when using the multiroot system to manage the target’s disk; see Partition Size specification.
e.g.: “1:10:30:20” (the default is “1:2:2:10”)
- sd_iftype (str) – (optional) interface to use for the disks (defaults to ide, as it is the one most Linux distros support out of the box). Available types (per QEMU’s -drive option): ide, scsi, sd, mtd, floppy, pflash, virtio
- ram_megs (int) – (optional) size of memory in megabytes (defaults to 2048)
- extra_cmdline (str) – a string with extra command line to add; %(FIELD)s supported (target tags).
Notes
The hard drive gets fully reinitialized every time the server is restarted (the backend file gets wiped and re-created).
It is still possible to force a re-partitioning of the backend by setting POS property pos_reinitialize.
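The MAC address convention recommended above (02:HX:00:00:00:HY, in the locally administered Ethernet space) can be produced with a small helper; this is an illustrative sketch, not part of TCF:

```python
def make_qemu_mac(hx, hy):
    # Build a locally-administered MAC in the 02:HX:00:00:00:HY format
    # recommended above; hx and hy are byte values (0-255).
    return "02:%02x:00:00:00:%02x" % (hx & 0xff, hy & 0xff)
```

e.g. make_qemu_mac(0x61, 0x05) yields "02:61:00:00:00:05", matching the example configuration above.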
8.6.3. Configuration API for PDUs and other power switching equipment¶
-
conf_00_lib_pdu.
target_pdu_socket_add
(name, pc, tags=None, power=True)¶
-
conf_00_lib_pdu.
apc_pdu_add
(name, powered_on_start=None, hostname=None)¶ Add targets to control the individual sockets of an APC PDU power switch.
The APC PDU needs to be set up and configured (refer to the instructions in
ttbl.apc.pci
); this function creates individual targets to expose each socket for debugging. Add to a configuration file
/etc/ttbd-production/conf_10_targets.py
(or similar):
apc_pdu_add("sp16")
where sp16 is the name (hostname) of the PDU
yields:
$ tcf list local/sp16-1 local/sp16-2 local/sp16-3 ... local/sp16-24
for a 24 outlet PDU
Parameters:
-
conf_00_lib_pdu.
dlwps7_add
(hostname, powered_on_start=None, user='admin', password='1234')¶ Add test targets to individually control each of a DLWPS7’s sockets
The DLWPS7 needs to be set up and configured; this function creates individual targets to expose each socket for debugging.
Add to a configuration file
/etc/ttbd-production/conf_10_targets.py
(or similar):
dlwps7_add("sp6")
yields:
$ tcf list local/sp6-1 local/sp6-2 local/sp6-3 local/sp6-4 local/sp6-5 local/sp6-6 local/sp6-7 local/sp6-8
Power controllers for targets can be implemented by instantiating a
ttbl.pc.dlwps7
:
pc = ttbl.pc.dlwps7("http://admin:1234@spM/O")
where O is the outlet number as it shows in the physical unit and spM is the name of the power switch.
Parameters:
Overview
Bill of materials
- a DLWPS7 unit and power cable connected to power plug
- a network cable
- a connection to a network switch to which the server is also connected (nsN)
Connecting the power switch
Ensure you have configured a class C network, 192.168.X.0/24, with static IP addresses, to which ideally only this server has access, to connect IP-controlled power switches.
Follow these instructions to create a network.
You might need a new Ethernet adaptor to connect to said network (might be PCI, USB, etc).
connect the power switch to said network
assign a name to the power switch and add it along its IP address in
/etc/hosts
; the convention is to call them spY, where Y is a number and sp stands for Switch, Power.
Warning
if your system uses proxies, you need to also add spY to the no_proxy environment variable in
/etc/bashrc
to avoid the daemon trying to access the power switch through the proxy, which will not work.
With the names in
/etc/hosts
, you can refer to the switches by name rather than by IP address.
Configuring the system
Choose a name for the power switch (spM), where M is a number
The power switch starts with IP address 192.168.0.100; it needs to be changed to 192.168.X.M:
Connect to nsN
Ensure the server can access 192.168.0.100 by adding this routing hack:
# ifconfig nsN:2 192.168.0.0/24
With lynx or a web browser, from the server, access the switch’s web control interface:
$ lynx http://192.168.0.100
Enter the default user admin, password 1234, select ok and indicate A to always accept cookies
Hit enter to refresh the link redirecting to 192.168.0.100/index.htm, scroll down to Setup and select it. On all these steps, make sure to hit submit for each individual change.
Look up the IP address setup, change it to 192.168.N.M (where M matches spM), gateway 192.168.N.1; hit the submit button next to it.
Disable the security lockout in section Delay
Set Wrong password lockout to zero minutes
Turn on setting power after power loss:
Power Loss Recovery Mode > When recovering after power loss select Turn all outlets on
Extra steps needed for newer units (https://dlidirect.com/products/new-pro-switch)
The new refreshed unit looks the same, but has WiFi connectivity and plenty of new features, some of which need tweaking; log in to the setup page again and, for each of these, set the value(s) and hit submit before going on to the next one:
Access settings (quite important, as this allows the driver to access the unit the same way as the previous generation of the product):
ENABLE: allow legacy plaintext login methods
Note it is explained below why this is not a security problem in this kind of deployment.
remove the routing hack:
# ifconfig nsN:2 down
The unit’s default admin username and password are kept as the original (admin, 1234), because:
- they are deployed on a dedicated network switch internal to the server; no one but the server users has access (targets run on another switch).
- they use HTTP Basic Auth, so they might as well not use authentication.
Add an entry in
/etc/hosts
for spM so we can refer to the DLWPS7 by name instead of by IP address:
192.168.4.X spM
-
conf_00_lib_pdu.
raritan_emx_add
(url, outlets=8, targetname=None, https_verify=True, powered_on_start=None)¶ Add targets to control the individual outlets of a Raritan EMX PDU
This is usually a low level tool for administrators that allows controlling the outlets individually. Normal power control for targets is implemented by instantiating a power controller interface as described in
ttbl.raritan_emx.pci
. For example, add to a
/etc/ttbd-production/conf_10_targets.py
(or similar) configuration file:
raritan_emx_add("https://admin:1234@sp6")
yields:
$ tcf list local/sp6-1 local/sp6-2 local/sp6-3 local/sp6-4 local/sp6-5 local/sp6-6 local/sp6-7 local/sp6-8
Parameters: - url (str) –
URL to access the PDU in the form:
https://[USERNAME:PASSWORD@]HOSTNAME
Note the login credentials are optional, but must match whatever is configured in the PDU for HTTP basic authentication, with permissions to change the outlet state.
- outlets (int) –
number of outlets in the PDU (model specific)
FIXME: guess this from the unit directly using JSON-RPC
- targetname (str) – (optional) base name for the targets; defaults to the hostname (eg: for https://mypdu.domain.com it’d be mypdu-1, mypdu-2, etc).
- powered_on_start (bool) –
what to do with the power on the downstream ports:
- None: leave them as they are
- False: power them off
- True: power them on
- https_verify (bool) – (optional, default True) do or do not HTTPS certificate verification.
Setup instructions
Refer to ttbl.raritan_emx.pci.
-
conf_00_lib_pdu.
usbrly08b_targets_add
(serial_number, target_name_prefix=None, power=False)¶ Set up individual power control targets for each relay of a Devantech USB-RLY08B
See below for configuration steps
Parameters:
Bill of materials
- A Devantech USB-RLY08B USB relay controller (https://www.robot-electronics.co.uk/htm/usb_rly08btech.htm)
- a USB A-Male to B-Male cable to connect it to the server
- an upstream USB A-female port to the server (in a hub or root hub)
Connecting the relay board to the system
- Connect the USB A-Male to the free server USB port
- Connect the USB B-Male to the relay board
Configuring the system for the fixture
- Choose a prefix name for the target (eg: re00) or let it be the default (usbrly08b-SERIALNUMBER).
- Find the relay board’s serial number (more methods)
- Ensure the device node for the board is accessible by the user
or groups running the daemon. See
ttbl.usbrly08b.pc
for details.
To create individual targets to control each individual relay, add in a configuration file such as
/etc/ttbd-production/conf_10_targets.py
:
usbrly08b_targets_add("00023456")
which yields, after restarting the server:
$ tcf list -a local/usbrly08b-00023456-01 local/usbrly08b-00023456-02 local/usbrly08b-00023456-03 local/usbrly08b-00023456-04 local/usbrly08b-00023456-05 local/usbrly08b-00023456-06 local/usbrly08b-00023456-07
To use the relays as power controllers on a power rail for another target, create instances of
ttbl.usbrly08b.pc
:
ttbl.usbrly08b.pc("00023456", RELAYNUMBER)
where RELAYNUMBER is 1 - 8, which matches the number of the relay etched on the board.
-
conf_00_lib_pdu.
ykush_targets_add
(ykush_serial, pc_url, powered_on_start=None)¶ Given the serial number of a YKUSH hub connected to the system, set up a number of targets to manually control it.
- (maybe) one target to control the whole hub
- One target per port YKNNNNN-1 to YKNNNNN-3 to control the three ports individually; this is used to debug powering up different parts of a target.
ykush_targets_add("YK34567", "http://USER:PASSWD@HOST/4")
yields:
$ tcf list local/YK34567 local/YK34567-1 local/YK34567-2 local/YK34567-3
To use then the YKUSH hubs as power controllers, create instances of
ttbl.pc_ykush.ykush
:
ttbl.pc_ykush.ykush("YK34567", PORT)
where PORT is 1, 2 or 3.
Parameters: - ykush_serial (str) – USB Serial Number of the hub (finding).
- pc_url (str) –
Power Control URL
- A DLPWS7 URL (
ttbl.pc.dlwps7
), if given, will create a target YKNNNNN to power on or off the whole hub and wait for it to connect to the system. - If None, no power control targets for the whole hub will be created. It will just be expected that the hub is permanently connected to the system.
- A DLPWS7 URL (
- powered_on_start (bool) –
what to do with the power on the downstream ports:
- None: leave them as they are
- False: power them off
- True: power them on
Bill of materials
a YKUSH hub and its serial number
Note the hub itself has no serial number, but an internal device connected to its downstream port number 4 does have the YK34567 serial number.
a male to mini-B male cable for power
a USB brick for power
- (optional) a DLWPS7 power switch to control the hub’s power
- or an always-on connection to a power plug
a male to micro-B male cable for upstream USB connectivity
an upstream USB B-female port to the server (in a hub or root hub)
Note the YKNNNNN targets are always tagged idle_poweroff = 0 (so they are never automatically powered off) but not skip_cleanup; the latter would mean they are never released when idle and, if a recovery failed somewhere, nobody would be able to re-acquire them to recover.
8.7. ttbd Configuration API¶
Configuration API for ttbd
-
ttbl.config.
defaults_enabled
= True¶ Parse defaults configuration blocks protected by:
if ttbl.config.defaults_enabled:
This is done so that a sensible configuration can be shipped by default that is easy to deactivate in a local configuration file.
This is important, as the default configuration includes the definition of three networks (nwa, nwb and nwc); if these are spread around multiple servers, clients would think they are the same network when they are in truth different networks.
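For example, a local configuration file (the filename here is illustrative) can disable those default blocks before they are parsed:

```python
# /etc/ttbd-production/conf_00_local.py (hypothetical filename)
import ttbl.config

# Skip all configuration blocks guarded by ttbl.config.defaults_enabled,
# e.g. the default nwa/nwb/nwc network definitions.
ttbl.config.defaults_enabled = False
```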
-
ttbl.config.
default_qemu_start
= 90¶ Qemu target count start
QEMU targets created by default get assigned IP addresses starting at the 90 range, so there is plenty of address space below for others.
-
ttbl.config.
processes
= 20¶ Number of processes to start
How many servers shall be started, each being able to run a request in parallel. Defaults to 20, but can be increased if HW is not being very cooperative.
(this is currently a hack, we plan to switch to a server that can spawn them more dynamically).
-
ttbl.config.
instance
= ''¶ Name of the current ttbd instance
Multiple separate instances of the daemon can be started, each named differently (or nothing).
-
ttbl.config.
instance_suffix
= ''¶ Filename suffix for the current ttbd instance
Per
instance
, this defines the string that is appended to different configuration files/paths that have to be instance specific but cannot be some sort of directory. Normally this is -INSTANCE (unless INSTANCE is empty).
-
ttbl.config.
target_add
(target, _id=None, tags=None, target_type=None, acquirer=None)¶ Add a target to the list of managed targets
Parameters: - target (ttbl.test_target) – target to add
- tags (dict) – Dictionary of tags that apply to the target (all tags are strings)
- _id (str) – name of the target, by default taken from the target object
- target_type (str) – string describing type of the target; by default it’s taken from the object’s type.
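A minimal usage sketch in a server configuration file (the target name and tag are illustrative):

```python
# /etc/ttbd-production/conf_10_targets.py (or similar)
import ttbl
import ttbl.config

# Create a bare target and register it; _id defaults to the name
# given to the target object, and all tag values are strings.
target = ttbl.test_target("mytarget")
ttbl.config.target_add(target, tags = { "category": "example" })
```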
-
ttbl.config.
interconnect_add
(ic, _id=None, tags=None, ic_type=None, acquirer=None)¶ Add a target interconnect
An interconnect is just another target that offers interconnection services to other targets.
Parameters: - ic (ttbl.interconnect_c) – interconnect to add
- _id (str) – name of the interconnect, by default taken from the object itself.
- tags (dict) – Dictionary of tags that apply to the target (all tags are strings)
- ic_type (str) – string describing type of the interconnect; by default it’s taken from the object’s type.
-
ttbl.config.
add_authenticator
(a)¶ Add an authentication methodology, eg:
Parameters: a (ttbl.authenticator_c) – authentication engine
>>> add_authenticator(ttbl.ldap_auth.ldap_user_authenticator("ldap://" ...))
-
ttbl.config.
target_max_idle
= 30¶ Maximum time a target is idle before it is powered off (seconds)
-
ttbl.config.
target_owned_max_idle
= 300¶ Maximum time an acquired target is idle before it is released (seconds)
-
ttbl.config.
cleanup_files_period
= 60¶ Time gap (seconds) after which the clean-up function is called
-
ttbl.config.
cleanup_files_maxage
= 86400¶ Age (seconds) after which a file will be deleted
-
ttbl.config.
tcp_port_range
= (1025, 65530)¶ Which TCP port range we can use
The server will take this into account when services that need port allocation look for a port; this allows opening a specific range in a firewall, for example.
Note you normally want this range to include the ports some services preallocate (eg: VNC requires >= 5900).
8.8. ttbd internals¶
Internal API for ttbd
Note interfaces are added with test_target.interface_add()
, not
by subclassing. See as examples ttbl.console.interface
or
ttbl.power.interface
.
-
exception
ttbl.
test_target_e
¶ A base for all operations regarding test targets.
-
exception
ttbl.
test_target_busy_e
(target)¶
-
exception
ttbl.
test_target_not_acquired_e
(target)¶
-
exception
ttbl.
test_target_release_denied_e
(target)¶
-
exception
ttbl.
test_target_not_admin_e
(target)¶
-
class
ttbl.
test_target_logadapter_c
(logger, extra)¶ Prefix to test target logging the name of the target and if acquired, the current owner.
This is useful to correlate logs in the server and the client when diagnosing issues.
Initialize the adapter with a logger and a dict-like object which provides contextual information. This constructor signature allows easy stacking of LoggerAdapters, if so desired.
You can effectively pass keyword arguments as shown in the following example:
adapter = LoggerAdapter(someLogger, dict(p1=v1, p2="v2"))
-
process
(msg, kwargs)¶ Process the logging message and keyword arguments passed in to a logging call to insert contextual information. You can either manipulate the message itself, the keyword args or both. Return the message and kwargs modified (or not) to suit your needs.
Normally, you’ll only need to override this one method in a LoggerAdapter subclass for your specific needs.
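A minimal sketch of this pattern using Python’s standard logging.LoggerAdapter (the class and field names here are illustrative, not TCF’s):

```python
import logging

class target_logadapter(logging.LoggerAdapter):
    # Prefix every message with the target name and, if owned,
    # the current owner, mirroring the behavior described above.
    def process(self, msg, kwargs):
        owner = self.extra.get("owner")
        prefix = self.extra["target"] + ("[%s]" % owner if owner else "")
        return "%s: %s" % (prefix, msg), kwargs

adapter = target_logadapter(logging.getLogger("ttbd"),
                            dict(target = "qu-05a", owner = "someuser"))
adapter.warning("power on")   # emitted as "qu-05a[someuser]: power on"
```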
-
-
ttbl.
who_daemon
()¶ Returns the internal user for daemon operations
-
ttbl.
who_split
(who)¶ Returns a tuple with the target owner specification split in two parts, the userid and the ticket. The ticket will be None if the owner specification doesn’t contain it.
-
ttbl.
who_create
(user_id, ticket=None)¶ Create a TTBD user descriptor from a user id name and a ticket
Parameters: - user_id (str) – user’s name / ID
- ticket (str) – (optional) ticket for the reservation
Returns: (str) user id descriptor
-
class
ttbl.
acquirer_c
(target)¶ Interface to resource acquisition managers/schedulers
A subclass of this is instantiated to manage the access to resources that can be contended; when using the TCF remoting mechanism that deals with targets connected to the current host, for example, this is
ttbl.symlink_acquirer_c
. This can, however, use any other resource manager.
The operations in here can raise any exception, but mostly the ones derived from
ttbl.acquirer_c.exception
:
ttbl.acquirer_c.timeout_e
ttbl.acquirer_c.busy_e
ttbl.acquirer_c.no_rights_e
ttbl.acquirer_c.cant_release_not_owner_e
ttbl.acquirer_c.cant_release_not_acquired_e
-
exception
exception
¶ General exception for acquisition system errors
-
exception
timeout_e
¶ Timeout acquiring
-
exception
busy_e
¶ The resource is busy, can’t acquire
-
exception
no_rights_e
¶ Not enough rights to perform the operation
-
exception
cant_release_not_owner_e
¶ Cannot release since the resource is acquired by someone else
-
exception
cant_release_not_acquired_e
¶ Cannot release since the resource is not acquired
-
acquire
(who, force)¶ Acquire the resource for the given user
The implementation is allowed to spin for a little while to get it done, but in general this shall be a non-blocking operation, returning busy if not available.
Parameters: - who (str) – user name
- force (bool) – force the acquisition (overriding current
user); this assumes the user who has permissions to do so;
if not, raise an exception child of
ttbl.acquirer_c.exception
.
Raises: - busy_e – if the target is busy and could not be acquired
- acquirer_c.timeout_e – some sort of timeout happened
- no_rights_e – not enough privileges for the operation
-
release
(who, force)¶ Release the resource from the given user
Parameters: - who (str) – user name
- force (bool) – force the release (overriding current
user); this assumes the user who has permissions to do so;
if not, raise an exception child of
ttbl.acquirer_c.exception
.
-
get
()¶ Return the current resource owner
-
class
ttbl.
symlink_acquirer_c
(target, wait_period=0.5)¶ The lamest file-system based mutex ever
This is a reentrant mutex implemented using symlinks (creating one is an atomic operation under POSIX).
To create it, declare the location where it will live and a string identifying the owner. Then you can acquire() or release() it. If it is already acquired, it can spin busy-wait on it (if given a timeout) or just fail. You can only release it if you own it.
Why like this? We’ll have multiple processes doing this on behalf of remote clients (so it makes no sense to track the owner by PID). The caller decides who gets to override and all APIs agree to use it (as it is advisory).
Warning
The reentrancy of the lock assumes that the owner will use a single thread of execution to operate under it.
Thus, the following scenario would fail and cause a race condition:
- Thread A: acquires as owner-A
- Thread B: starts to acquire as owner-A
- Thread A: releases as owner-A (now released)
- Thread B: verifies it was acquired by owner-A so passes as acquired
- Thread B: MISTAKENLY assumes it owns the mutex when it is released in reality
So use a different owner for each thread of execution.
-
acquire
(who, force)¶ Acquire the mutex, blocking until acquired
-
release
(who, force)¶ Release the resource from the given user
Parameters: - who (str) – user name
- force (bool) – force the release (overriding current
user); this assumes the user who has permissions to do so;
if not, raise an exception child of
ttbl.acquirer_c.exception
.
-
get
()¶ Return the current resource owner
-
class
ttbl.
tt_interface_impl_c
(name=None, **kwargs)¶ -
upid
= None¶ Unique Physical IDentification
flat dictionary of keys to report HW information for inventory purposes of whichever HW component is used to implement this driver.
Normally set from the driver with a call to
upid_set()
; however, after instantiation, more fields can be added to a driver with information that can be useful to locate a piece of HW. Eg:
>>> console_pc = ttbl.console.generic_c(chunk_size = 8,
>>>                                     interchunk_wait = 0.15)
>>> console_pc.upid_set("RS-232C over USB",
>>>     serial_number = "RS33433E",
>>>     location = "USB port #4 front")
-
upid_set
(name, **kwargs)¶ Set
upid
information in a single shotParameters: - name (str) – Name of the physical component that implements this interface functionality
- kwargs (dict) – fields and values (strings) to report for the physical component that implements this interface’s functionality; it is important to specify here a unique piece of information that will allow this component to be reported separately in the instrumentation section of the inventory. Eg: serial numbers or paths to unique devices.
For example:
>>> impl_object.upid_set("ACME power controller", serial_number = "XJ323232")
This is normally called from the __init__() function of a component driver, that must inherit
tt_interface_impl_c
.
-
-
class
ttbl.
tt_interface
¶ A target specific interface
This class can be subclassed and then instanced to create a target specific interface for implementing any kind of functionality. For example, the
console
, in a configuration file when the target is added:
>>> target = test_target("TARGETNAME")
>>> ttbl.config.target_add(target)
>>>
>>> target.interface_add(
>>>     "console",
>>>     ttbl.console.interface(
>>>         serial0 = ttbl.console.serial_pc("/dev/ttyS0"),
>>>         serial1 = ttbl.console.serial_pc("/dev/ttyS1"),
>>>         default = "serial0",
>>>     )
>>> )
creates an instance of the console interface with access to two consoles. The interface is then available over HTTP at
https://SERVER/ttb-vN/target/TARGETNAME/console/*
A common pattern for interfaces is to be composed of multiple components, with a different implementation driver for each. For that, a class named impl_c is created to define the base interface that all the implementation drivers need to support.
To create methods that are served over the
https://SERVER/ttb-vN/target/TARGETNAME/INTERFACENAME/*
url, create methods in the subclass called METHOD_NAME with the signature:
>>> def METHOD_NAME(self, target, who, args, user_path):
>>>     impl, component = self.arg_impl_get(args, "component")
>>>     arg1 = args.get('arg1', None)
>>>     arg2 = args.get('arg2', None)
>>>     ...
where:
- METHOD is put, get, post or delete (HTTP methods)
- NAME is the method name (eg: set_state)
- target is the target object this call is happening onto
- who is the (logged in) user making this call
- args is a dictionary of arguments passed by the client for the HTTP call keyed by name (a string)
- user_path is a string describing the space in the filesystem where files for this user are stored
Return values:
- these methods can throw an exception on error (and an error code will be sent to the client)
- or return a dictionary of keys and values for the client, encoded as JSON (so it must be JSON encodeable).
To stream a file as output, any other keys are ignored and the following keys are interpreted, with special meaning
- stream_file: (string) the named file will be streamed to the client
- stream_offset: (positive integer) the file stream_file will be streamed starting at the given offset.
- stream_generation: (positive, monotonically increasing integer) a number that describes the current iteration of this file, which might be reset [thus bringing its apparent size to the client to zero] upon certain operations (for example, for serial console captures, when the target power cycles this number goes up and the capture size restarts at zero).
An X-stream-gen-offset header will be returned to the client with the string GENERATION OFFSET, where GENERATION is the current generation of the stream and OFFSET the offset that was actually used (possibly capped to the maximum available offset).
This way the client can use OFFSET + Content-Length to tell the next offset to query.
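For example, a client-side sketch (the helper name is illustrative, not part of the API) of computing the next offset to poll from the X-stream-gen-offset header and the Content-Length of the response:

```python
def next_stream_offset(gen_offset_header, content_length, last_generation):
    # The header carries "GENERATION OFFSET"; if the generation changed,
    # the file was reset, so re-read from offset zero.
    generation, offset = (int(i) for i in gen_offset_header.split())
    if last_generation is not None and generation != last_generation:
        return generation, 0
    return generation, offset + content_length
```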
When multiple components are used to implement the functionality of an interface or to expose multiple instruments that implement the functionality (such as in
ttbl.power.interface
or
ttbl.console.interface
), use the methods below. See as an example the
debug
class.-
impls
= None¶ List of components that implement this interface
(for interfaces that support multiple components only)
This has to be an ordered dict because the user might care about order (eg: power rails need to be executed in the given order)
-
cls
= None¶ class the implementations for this interface are based on [set by the initial call to
impls_set()
]
-
impl_add
(name, impl)¶ Append a new implementation to the list of implementations this interface supports.
This can be used after an interface has been declared, such as:
>>> target = ttbl.test_target('somename')
>>> target.interface_add('power', ttbl.power.interface(*power_rail))
>>> target.power.impl_add('newcomponent', impl_object)
Parameters: - name (str) – implementation’s name
- impl – object that defines the implementation; this must
be an instance of the class
cls
(this gets set by the first call to impls_set()).
-
impls_set
(impls, kwimpls, cls)¶ Record in self.impls a given set of implementations (or components)
This is only used for interfaces that support multiple components.
Parameters: - impls (dict) – list of objects of type cls or of tuples (NAME, IMPL) to serve as implementations for the interface; when non named, they will be called componentN
- kwimpls (dict) – dictionary keyed by name of objects of type cls to serve as implementations for the interface.
- cls (type) – base class for the implementations (eg:
ttbl.console.impl_c
)
This is meant to be used straight in the constructor of a derivative of
ttbl.tt_interface
such as:>>> class my_base_impl_c(object): >>> ... >>> class my_interface(ttbl.tt_interface): >>> def __init__(*impls, **kwimpls): >>> ttbl.tt_interface(self) >>> self.impls_set(impls, kwimplws, my_base_implc_c)
and it allows specifying the interface implementations in multiple ways:
a sorted list of implementations (which will be given generic component names such as component0, component1):
>>> target.interface_add("my", my_interface(
>>>     an_impl(args),
>>>     another_impl(args),
>>> ))
COMPONENTNAME = IMPLEMENTATION (python initializers), which allows naming the components (and keeps the order, in Python 3 only):
>>> target.interface_add("my", my_interface(
>>>     something = an_impl(args),
>>>     someotherthing = another_impl(args),
>>> ))
a list of tuples ( COMPONENTNAME, IMPLEMENTATION ), which allows naming the implementations while keeping the order (in Python 2):
>>> target.interface_add("my", my_interface(
>>>     ( "something", an_impl(args) ),
>>>     ( "someotherthing", another_impl(args) ),
>>> ))
all forms can be combined; as well, if the implementation is the name of an existing component, then it becomes an alias.
-
impl_get_by_name
(arg, arg_name='component')¶ Return an interface’s component implementation by name
-
arg_impl_get
(args, arg_name, allow_missing=False)¶ Return an interface’s component implementation by name
Given the arguments passed with an HTTP request, check if one called ARG_NAME is present; we want to get args[ARG_NAME] and self.impls[ARG_NAME].
Returns: the implementation in self.impls for the component specified in the args
-
args_impls_get
(args)¶ Return a list of components by name or all if none given
If no component argument is given, return the whole list of component implementations, otherwise only the selected one.
(internal interface)
Parameters: - args (dict) – dictionary of arguments keyed by argument name
Returns: a list of (NAME, IMPL), based on whether we got a specific component to run on (only execute that one) or none (execute on all the components)
-
static
assert_return_type
(val, expected_type, target, component, call, none_ok=False)¶ Assert a value generated from a target interface call driver is of the right type and complain otherwise
-
static
instrument_mkindex
(name, upid, kws)¶
-
instrumentation_publish_component
(target, iface_name, index, instrument_name, upid, components=None, kws=None)¶ Publish in the target’s inventory information about the instrumentation that implements the functionality of a component of this interface
-
instrumentation_publish
(target, iface_name)¶ Publish in the target’s inventory information about the instrumentation that implements the functionality of the components of this interface
-
request_process
(target, who, method, call, args, files, user_path)¶ Process a request into this interface from a proxy / brokerage
When the ttbd daemon is exporting access to a target via any interface (e.g: REST over Flask or D-Bus or whatever), this implements a bridge to pipe those requests into this interface.
Parameters: - target (test_target) – target upon which we are operating
- who (str) – user who is making the request
- method (str) – ‘POST’, ‘GET’, ‘DELETE’ or ‘PUT’ (mapping to HTTP requests)
- call (str) – interface’s operation to perform (it’d map to the different methods the interface exposes)
- args (dict) – dictionary of key/value with the arguments to the call, some might be JSON encoded.
- files (dict) – dictionary of key/value with the files uploaded via forms
(https://flask.palletsprojects.com/en/1.1.x/api/#flask.Request.form)
- user_path (str) – path to where user files are located
Returns: dictionary of results, call specific, e.g.:
>>> dict(
>>>     result = "SOMETHING",    # convention for unified result
>>>     output = "something",
>>>     value = 43
>>> )
For an example, see
ttbl.buttons.interface
.
-
class
ttbl.
test_target
(_test_target__id, _tags=None, _type=None)¶ -
state_path
= '/var/run/ttbd'¶
-
files_path
= '__undefined__'¶ Path where files are stored
-
properties_user
= set(['pos_mode', 'tcpdump', 'pos_repartition', 'pos_reinitialize', <_sre.SRE_Pattern object>])¶ Properties that normal users (non-admins) can set when owning a target and that will be reset when releasing a target (except if listed in
properties_keep_on_release
Note this is a global variable that can be specialized per class/target.
-
properties_keep_on_release
= set(['linux_options_append', <_sre.SRE_Pattern object>])¶ Properties that are not reset when a target is released
-
id
= None¶ Target name/identifier
Target’s tags
FIXME document more
-
thing_to
= None¶ List of targets this target is a thing to; see ttbl.things.interface
FIXME: this needs to be moved to that interface
-
fsdb
= None¶ filesystem database of target state; the multiple daemon processes use this to store information that reflects the target’s state.
-
kws
= None¶ Keywords that can be used to substitute values in commands, messages. Target’s tags are translated to keywords here.
ttbl.config.target_add()
will update this with the final list of tags.
-
release_hooks
= None¶ Functions to call when the target is released (things like removing tunnels the user created, resetting debug state, etc); this is meant to leave the target’s state pristine so that it does not affect the next user that acquires it. Each interface will add as needed, so it gets executed upon
release()
, under the owned lock.
-
interface_origin
= None¶ Keep places where interfaces were registered from
-
power_on_pre_fns
= None¶ Pre/post power on/off hooks
For historical reasons, these lists are here instead of in the new power interface extension; at some point they will be moved there.
FIXME: move to target.power.
-
to_dict
(projections=None)¶ Return all of the target’s data as a dictionary
Parameters: projections (list) – (optional) list of fields to include (default: all).
Field names can use periods to dig into dictionaries.
Field names can match Python fnmatch patterns.
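As a sketch of how such projections could behave, fields can be flattened into period-joined names and filtered with fnmatch; `flatten` and `project` here are hypothetical helpers, not the actual implementation:

```python
import fnmatch

def flatten(d, prefix=""):
    # Turn nested dictionaries into { "a.b.c": value } period-joined keys
    flat = {}
    for k, v in d.items():
        key = prefix + k
        if isinstance(v, dict):
            flat.update(flatten(v, key + "."))
        else:
            flat[key] = v
    return flat

def project(d, projections=None):
    # Keep only the fields whose period-joined name matches any fnmatch
    # pattern in projections; None means keep everything
    flat = flatten(d)
    if projections is None:
        return flat
    return {
        k: v for k, v in flat.items()
        if any(fnmatch.fnmatch(k, p) for p in projections)
    }

data = {"id": "t1",
        "interfaces": {"power": {"state": True},
                       "console": {"default": "ttyS0"}}}
print(project(data, ["interfaces.power.*"]))
# → {'interfaces.power.state': True}
```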
-
type
¶
-
acquirer
¶
-
get_id
()¶
-
add_to_interconnect
(ic_id, ic_tags=None)¶ Add a target to an interconnect
Parameters: - ic_id (str) –
name of the interconnect; might be present in this server or another one.
If named
IC_ID#INSTANCE
, this is understood as this target has multiple connections to the same interconnect (via multiple physical or virtual network interfaces).No instance name (no
#INSTANCE
) means the default, primary connection.Thus, a target that can instantiate multiple virtual machines, for example, might want to declare them here if we need to pre-determine and pre-assign those IP addresses.
- ic_tags (dict) – (optional) dictionary of tags describing the tags for this target on this interconnect.
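The IC_ID#INSTANCE naming convention described above can be sketched as follows; `ic_parse` is a hypothetical helper, not part of the API:

```python
def ic_parse(ic_id):
    # Split an interconnect name of the form IC_ID#INSTANCE into
    # (name, instance); no '#' means the default, primary connection
    # (instance None)
    if "#" in ic_id:
        name, instance = ic_id.split("#", 1)
        return name, instance
    return ic_id, None

print(ic_parse("nwa#2"))   # → ('nwa', '2')
print(ic_parse("nwa"))     # → ('nwa', None)
```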
Update the tags assigned to a target
This will ensure the tags described in d are given to the target and all the values associated to them updated (such as interconnect descriptions, addresses, etc).
It can be used to add tags to a target after it is added to the configuration, such as with:
>>> arduino101_add("a101-03", ...) >>> ttbl.config.targets["a101-03"].tags_update(dict(val = 34))
-
timestamp_get
()¶
-
timestamp
()¶ Update the timestamp on the target to record last activity time
-
owner_get
()¶ Return who the current owner of this target is
Returns: object describing the owner
-
acquire
(who, force)¶ Assign the test target to user who unless it is already taken by someone else.
Parameters: who (str) – User that is claiming the target Raises: test_target_busy_e
if already taken
-
enable
(who=None)¶ Enable the target (so it will be regularly used)
Parameters: who (str) – Deprecated
-
disable
(who=None)¶ Disable the target (so it will not be regularly used)
It still can be used, but it will be filtered out by the client regular listings.
Parameters: who (str) – Deprecated
-
property_set
(prop, value)¶ Set a target’s property
Parameters: Due to the hierarchical aspect of the key property namespace, if a property a.b is set, any property called a.b.NAME will be cleared out.
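A minimal sketch of that hierarchical clearing rule, using a plain dict to stand in for the target's property store; `property_set` here is illustrative, not the real method:

```python
def property_set(store, prop, value):
    # Setting 'a.b' clears any descendant 'a.b.NAME' key, since the
    # key property namespace is hierarchical (a flat dict stands in
    # for the target's property database here)
    for key in list(store):
        if key.startswith(prop + "."):
            del store[key]
    store[prop] = value

store = {"a.b": 1, "a.b.x": 2, "a.b.y": 3, "a.c": 4}
property_set(store, "a.b", 10)
# store is now {"a.b": 10, "a.c": 4}
```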
-
property_set_locked
(who, prop, value)¶ Set a target’s property (must be locked by the user)
Parameters:
-
property_get
(prop, default=None)¶ Get a target’s property
Parameters:
-
property_get_locked
(who, prop, default=None)¶ Get a target’s property
Parameters:
-
property_is_user
(name)¶ Return True if a property is considered a user property (no admin rights are needed to set it or read it).
Returns: bool
-
property_keep_value
(name)¶ Return True if a user property’s value needs to be kept.
-
release
(who, force)¶ Release the ownership of this target.
If the target is not owned by anyone, it does nothing.
Parameters: Raises: test_target_not_acquired_e
if not taken
-
target_owned_and_locked
(**kwds)¶ Ensure the target is locked and owned for an operation that requires exclusivity
Parameters: who – User that is calling the operation Raises: test_target_not_acquired_e
if the target is not acquired by anyone,test_target_busy_e
if the target is owned by someone else.
-
target_is_owned_and_locked
(who)¶ Returns if a target is locked and owned for an operation that requires exclusivity
Parameters: who – User that is calling the operation Returns: True if @who owns the target or is admin, False otherwise or if the target is not owned
-
interface_add
(name, obj)¶ Adds object as an interface to the target accessible as
self.name
Parameters: - name (str) – interface name; must not already exist
and a valid Python identifier as we’ll be calling functions
as
target.name.function()
- obj (tt_interface) – interface implementation, an instance
of
tt_interface
which provides the details and methods to call plus ttbl.tt_interface.request_process()
to handle calls from proxy/brokerage layers.
-
-
class
ttbl.
interconnect_c
(_test_target__id, _tags=None, _type=None)¶ Define an interconnect as a target that provides connectivity services to other targets.
-
ttbl.
open_close
(*args, **kwds)¶
-
class
ttbl.
authenticator_c
¶ Base class that defines the interface for an authentication system
Upon calling the constructor, it defines a set of roles that will be returned by the
login()
if the tokens to be authenticated are valid. -
static
login
(token, password, **kwargs)¶ Validate an authentication token exists and the password is valid.
If it is, extract whichever information from the authentication system is needed to determine if the user represented by the token is allowed to use the infrastructure and with which category (as determined by the role mapping)
Returns: None if user is not allowed to log in, otherwise a dictionary with user’s information: - roles: set of strings describing roles the user has
FIXME: left as a dictionary so we can add more information later
-
exception
error_e
¶
-
exception
unknown_user_e
¶
-
exception
invalid_credentials_e
¶
-
ttbl.
daemon_pid_add
(pid)¶
-
ttbl.
daemon_pid_check
(pid)¶
-
ttbl.
daemon_pid_rm
(pid)¶
-
class
ttbl.fsdb.
fsdb
(location)¶ This is a very simple file-system based 'DB' with atomic access
- Atomic access is implemented by storing values in the target of symlinks
- the data stored is strings
- the amount of data stored is thus limited (to 1k in OSX, 4k in Linux/ext3, maybe others dep on the FS).
Why? Because creating a symlink takes only one system call and is atomic; the same goes for reading it. Thus, for small values, it is very efficient.
NOTE: the key space is flat (no dictionaries) but we implement it with naming, such as:
l['a.b.c'] = 3
is the equivalent of:
l['a']['b']['c'] = 3
it also makes it way faster and easier to filter for fields.
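A minimal sketch of the symlink trick described above, assuming a POSIX filesystem; `symlink_db` is a toy stand-in, not the real class (which, among other things, uses a unique temporary name to be concurrency-safe):

```python
import os
import tempfile

class symlink_db:
    # Each key is a file name; its value is stored as the target of a
    # dangling symlink, so set/get are single, atomic system calls
    # (symlink/readlink)
    def __init__(self, location):
        self.location = location

    def set(self, field, value):
        path = os.path.join(self.location, field)
        # symlink() fails if the link already exists, so create under
        # a temporary name and rename over it, which is also atomic
        tmp = path + ".tmp"
        os.symlink(value, tmp)
        os.rename(tmp, path)

    def get(self, field, default=None):
        try:
            return os.readlink(os.path.join(self.location, field))
        except OSError:
            return default

d = symlink_db(tempfile.mkdtemp())
d.set("a.b.c", "3")
print(d.get("a.b.c"))   # → 3
```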
Initialize the database to be saved in the given location directory
Parameters: location (str) – Directory where the database will be kept -
exception
exception
¶
-
keys
(pattern=None)¶ List the fields/keys available in the database
Parameters: pattern (str) – (optional) pattern the key names must match, in the style of fnmatch
. By default, all keys are listed.
-
get_as_slist
(*patterns)¶ List the fields/keys available in the database
Parameters: patterns (list(str)) – (optional) list of patterns of fields we must list in the style of fnmatch
. By default, all keys are listed. Returns list(str, obj): list of (FLATKEY, VALUE) sorted by FLATKEY (so a.b.c, representing ['a']['b']['c'], goes after a.b, representing ['a']['b']).
-
get_as_dict
(*patterns)¶ List the fields/keys and values available in the database
Parameters: patterns (list(str)) – (optional) patterns the key names must match, in the style of fnmatch
. By default, all keys are listed. Returns dict: the keys and their values
-
field_valid_regex
= <_sre.SRE_Pattern object>¶
-
set
(field, value)¶ Set a field in the database
Parameters:
-
get
(field, default=None)¶
8.8.1. User access control and authentication¶
-
class
ttbl.user_control.
User
(userid, fail_if_new=False)¶ Implement a database of users that are allowed to use this system
The information on this database is obtained from authentication systems and just stored locally for caching–it’s mainly to set roles for users.
-
file_access_lock
= <thread.lock object>¶
-
exception
user_not_existant_e
¶ Exception raised when information about a user cannot be located
-
state_dir
= None¶
-
static
is_authenticated
()¶
-
static
is_active
()¶
-
static
is_anonymous
()¶
-
is_admin
()¶
-
get_id
()¶
-
set_role
(role)¶
-
has_role
(role)¶
-
save_data
()¶
-
static
load_user
(userid)¶
-
static
create_filename
(userid)¶ Makes a safe filename based on the user ID
-
static
search_user
(userid)¶
-
-
class
ttbl.user_control.
local_user
(**kwargs)¶ Define a local anonymous user that we can use to skip authentication on certain situations (when the user starts the daemon as such, for example).
See https://flask-login.readthedocs.org/en/latest/#anonymous-users for the Flask details.
-
save_data
()¶
-
is_authenticated
()¶
-
is_anonymous
()¶
-
is_active
()¶
-
is_admin
()¶
-
-
class
ttbl.auth_ldap.
authenticator_ldap_c
(url, roles=None)¶ Use LDAP to authenticate users
To configure, create a config file that looks like:
>>> import ttbl.auth_ldap
>>>
>>> add_authenticator(ttbl.auth_ldap.authenticator_ldap_c(
>>>     "ldap://URL:PORT",
>>>     roles = {
>>>         'role1': { 'users': [ "john", "lamar", ],
>>>                    'groups': [ "Occupants of building 3" ]
>>>         },
>>>         'role2': { 'users': [ "anthony", "mcclay" ],
>>>                    'groups': [ "Administrators",
>>>                                "Knights who say ni" ]
>>>         },
>>>     }))
The roles dictionary determines who gets to be an admin or who gets access to XYZ resources.
This will make that john, lamar and any user on the group Occupants of building 3 to have the role role1.
Likewise for anthony, mcclay and any user who is a member of either the group Administrators or the group Knights who say ni, they are given role role2
Parameters: -
login
(email, password, **kwargs)¶ Validate an email (or token) and password combination and pull which roles it has assigned
Returns: set listing the roles the token/password combination has according to the configuration Return type: set Raises: authenticator_c.invalid_credentials_e if the token/password is not valid Raises: authenticator_c.error_e if any kind of error during the process happens
-
-
class
ttbl.auth_ldap.
ldap_map_c
(url, bind_username=None, bind_password=None, max_age=200)¶ General LDAP mapper
This object maps and caches entities in an LDAP database, to speed up looking up values from other values.
For example, to get the displayName based on the sAMAccountName:
>>> account_name = self.lookup("Some User", 'displayName', 'sAMAccountName')
looks up an LDAP entity that has Some User as field displayName and returns its sAMAccountName.
This does the same, but caches the results so that the next time it looks it up, it doesn’t need to hit LDAP:
>>> account_name = self.lookup_cached("Some User", 'displayName', 'sAMAccountName')
this object caches the objects, as we assume LDAP behaves mostly as a read-only database.
Parameters: - url (str) –
URL of the LDAP server; in the form ldap[s]://[BIND_USERNAME[:BIND_PASSWORD]]@HOST:PORT.
The bind username and password can be specified in the arguments below (e.g. when either contains a @ or :); they follow the same rules for password discovery.
- bind_username (str) – (optional) login for binding to LDAP; might not be needed in all setups.
- bind_password (str) –
(optional) password for binding to LDAP; might not be needed in all setups.
Will be handled by
commonl.password_get()
, so passwords such as:
- KEYRING will ask the accounts keyring for the password for service url for username bind_username
- KEYRING:SERVICE will ask the accounts keyring for the password for service SERVICE for username bind_username
- FILENAME:PATH will read the password from filename PATH.
Anything else is considered a hardcoded password.
- max_age (int) – (optional) number of seconds each cached entry is to live. Once an entry is older than this, the LDAP server is queried again for that entry.
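The max_age caching behaviour can be sketched as follows; `ttl_cache` and the `fake_ldap` backend are illustrative stand-ins for the real class and its LDAP queries:

```python
import time

class ttl_cache:
    # Cached entries older than max_age seconds are dropped and
    # re-fetched from the backend (lookup_backend stands in for the
    # actual LDAP query)
    def __init__(self, lookup_backend, max_age=200):
        self.lookup_backend = lookup_backend
        self.max_age = max_age
        self.cache = {}

    def lookup_cached(self, what, field_lookup, field_report):
        key = (what, field_lookup, field_report)
        hit = self.cache.get(key)
        if hit is not None:
            value, timestamp = hit
            if time.time() - timestamp < self.max_age:
                return value            # still fresh, skip the backend
        value = self.lookup_backend(what, field_lookup, field_report)
        self.cache[key] = (value, time.time())
        return value

calls = []
def fake_ldap(what, field_lookup, field_report):
    calls.append(what)
    return "jdoe"

c = ttl_cache(fake_ldap, max_age=200)
c.lookup_cached("Some User", "displayName", "sAMAccountName")
c.lookup_cached("Some User", "displayName", "sAMAccountName")
print(len(calls))   # → 1 (backend hit only once)
```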
-
exception
error_e
¶
-
exception
invalid_credentials_e
¶
-
max_age
= None¶ maximum number of seconds an entry will live in the cache before it is considered old and refetched from the servers.
-
lookup
(what, field_lookup, field_report)¶ Lookup the first LDAP record whose field field_lookup contains a value of what
Returns: the value of the field field_report for the record found; None if not found or the record doesn't have the field.
-
class
ttbl.auth_localdb.
authenticator_localdb_c
(name, users)¶ Use a simple DB to authenticate users
To configure, create a config file that looks like:
>>> import ttbl.auth_localdb >>> >>> add_authenticator(ttbl.auth_localdb.authenticator_localdb_c( >>> "NAME", >>> [ >>> ['user1', 'password1', 'role1', 'role2', 'role3'...], >>> ['user2', 'password2', 'role1', 'role4', 'role3' ], >>> ['user3', None, 'role2', 'role3'...], >>> ['user4', ], >>> ]))
Each item in the users list is a list containing:
- the user id (userX)
- the password in plaintext (FIXME: add digests); if empty, then the user has no password.
- list of roles (roleX)
Parameters: -
login
(email, password, **kwargs)¶ Validate an email (or token) and password combination and pull which roles it has assigned
Returns: set listing the roles the token/password combination has according to the configuration Return type: set Raises: authenticator_c.invalid_credentials_e if the token/password is not valid Raises: authenticator_c.error_e if any kind of error during the process happens
-
class
ttbl.auth_party.
authenticator_party_c
(roles=None, local_addresses=None)¶ Life is a party! Authenticator that allows anyone to log in and be an admin.
To configure, create a config file that looks like:
>>> import ttbl.auth_party
>>>
>>> add_authenticator(ttbl.auth_party.authenticator_party_c(
>>>     [ 'admin', 'user', 'role3', ...],
>>>     local_addresses = [ '127.0.0.1', '192.168.0.2' ] ))
Where you list the roles that everyone will get all the time.
Normally you want this only for debugging or for local instances. Note you can set a list of local addresses to match against (strings or regular expressions), which enforces that only authentication requests from those addresses are allowed.
FIXME: check connections are coming only from localhost
-
login
(email, password, **kwargs)¶ Validate an email (or token) and password combination and pull which roles it has assigned
Kwargs: ‘remote_addr’ set to a string describing the IP address where the connection comes from. Returns: set listing the roles the token/password combination has according to the configuration Return type: set Raises: authenticator_c.invalid_credentials_e if the token/password is not valid Raises: authenticator_c.unknown_user_e if there are remote addresses initialized and the request comes from a non-local address. Raises: authenticator_c.error_e if any kind of error during the process happens
-
8.8.2. Console Management Interface¶
8.8.2.1. Access target’s serial consoles / bidirectional channels¶
Implemented by ttbl.console.interface
.
-
class
ttbl.console.
impl_c
(command_sequence=None, command_timeout=5)¶ Implementation interface for a console driver
The target will list the available consoles in the targets’ consoles tag
param list command_sequence: (optional) when the console is enabled (from
target.console.enable
or when powering up a target that also enables the console at the same time viatarget.power.on
), run a sequence of send/expect commands.This is commonly used when the serial line is part of a server and a set of commands have to be typed before the serial connection has to be established. For example, for some Lantronix KVM serial servers, when accessing the console over SSH we need to wait for the prompt and then issue a connect serial command:
>>> serial0_pc = ttbl.console.ssh_pc(
>>>     "USER:PASSWORD@LANTRONIXHOSTNAME",
>>>     command_sequence = [
>>>         ( "",
>>>           # command prompt, 'CR[USERNAME@IP]> '... or not, so just
>>>           # look for 'SOMETHING> '
>>>           # ^ will not match because we are getting a CR
>>>           re.compile("[^>]+> ") ),
>>>         ( "connect serial\r\n",
>>>           "To exit serial port connection, type 'ESC exit'." ),
>>>     ],
>>>     extra_opts = {
>>>         # old, but that's what the Lantronix server has :/
>>>         "KexAlgorithms": "diffie-hellman-group1-sha1",
>>>         "Ciphers" : "aes128-cbc,3des-cbc",
>>>     })
This is a list of tuples ( SEND, EXPECT ); SEND is a string sent over to the console (unless it is the empty string, in which case nothing is sent). EXPECT can be anything that can be fed to Python's Expect
expect
function:- a string
- a compiled regular expression
- a list of such
The timeout for each expectation is hardcoded to five seconds (FIXME).
Note for this to work, the driver that uses this class must call the _command_sequence_run() method from their
impl_c.enable()
methods.
param int command_timeout: (optional) number of seconds to wait for a response to a command before declaring a timeout
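The send/expect loop can be sketched as follows; the console object with write()/read() is hypothetical, and this is not the actual _command_sequence_run() implementation:

```python
import re

def run_command_sequence(console, sequence, timeout=5):
    # For each (SEND, EXPECT) tuple: write SEND (unless empty), then
    # scan the console output for EXPECT, which may be a plain string
    # or a compiled regular expression
    for send, expect in sequence:
        if send:
            console.write(send)
        output = console.read(timeout=timeout)
        if isinstance(expect, str):
            matched = expect in output
        else:                          # compiled regular expression
            matched = expect.search(output) is not None
        if not matched:
            raise TimeoutError("did not see %r" % expect)

class fake_console:
    # Canned responses standing in for a real serial server
    def __init__(self, responses):
        self.responses = list(responses)
        self.sent = []
    def write(self, data):
        self.sent.append(data)
    def read(self, timeout=5):
        return self.responses.pop(0)

con = fake_console([
    "CR[admin@10.0.0.1]> ",
    "To exit serial port connection, type 'ESC exit'.",
])
run_command_sequence(con, [
    ("", re.compile("[^>]+> ")),
    ("connect serial\r\n", "To exit serial port connection"),
])
print(con.sent)   # → ['connect serial\r\n']
```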
-
exception
exception
¶ General console driver exception
-
exception
timeout_e
¶ Console enablement command sequence timed out
-
disable
(target, component)¶ Disable a console
Parameters: console (str) – (optional) console to disable; if missing, the default one.
-
state
(target, component)¶ Return the given console’s state
Parameters: console (str) – (optional) console to enable; if missing, the default one Returns: True if enabled, False otherwise
-
setup
(target, component, parameters)¶ Setup console parameters (implementation specific)
Check
impl_c.read()
for common parametersParameters: parameters (dict) – dictionary of implementation specific parameters Returns: nothing
-
read
(target, component, offset)¶ Return data read from the console since it started recording from a given byte offset.
Check
impl_c.read()
for common parametersParams int offset: offset from which to read Returns: data dictionary of values to pass to the client; the data is expected to be in a file which will be streamed to the client. >>> return dict(stream_file = CAPTURE_FILE, >>> stream_generation = MONOTONIC, >>> stream_offset = OFFSET)
this allows supporting large amounts of data automatically; the generation is a number that is monotonically increased, for example, each time a power cycle happens. This is basically when a new file is created.
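A sketch of how a client could honor this offset/generation contract; `read_call` stands in for the server-side read() and all names here are hypothetical:

```python
def follow(read_call, state):
    # Track the last (generation, offset); when the generation changes
    # (a new capture file was created, e.g. after a power cycle),
    # restart reading from offset 0 instead of carrying a stale offset
    generation, offset = state
    data, new_generation = read_call(offset)
    if new_generation != generation:
        data, new_generation = read_call(0)  # new file: re-read from start
        offset = 0
    return data, (new_generation, offset + len(data))

capture = {"generation": 1, "data": "hello"}

def read_call(offset):
    # stands in for the server: data past OFFSET plus current generation
    return capture["data"][offset:], capture["generation"]

state = (None, 0)
data, state = follow(read_call, state)       # first read: "hello"
capture.update(generation=2, data="fresh")   # power cycle: new file
data, state = follow(read_call, state)
print(data)   # → fresh
```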
-
size
(target, component)¶ Return the amount of data currently read from the console.
Check
impl_c.read()
for common parametersReturns: number of bytes read from the console since the last power up.
-
write
(target, component, data)¶ Write bytes to the console
Check
impl_c.read()
for common parametersParameters: data – string of bytes or data to write to the console
-
class
ttbl.console.
interface
(*impls, **kwimpls)¶ Interface to access the target’s consoles
An instance of this gets added as an object to the target object with:
>>> ttbl.config.targets['qu05a'].interface_add(
>>>     "console",
>>>     ttbl.console.interface(
>>>         ttyS0 = ttbl.console.serial_device("/dev/ttyS5"),
>>>         ttyS1 = ttbl.capture.generic("ipmi-sol"),
>>>         default = "ttyS0",
>>>     )
>>> )
Note how default has been made an alias of ttyS0
Parameters: impls (dict) – dictionary keyed by console name and which values are instantiation of console drivers inheriting from
ttbl.console.impl_c
or names of other consoles (to serve as aliases). Names have to be valid Python symbol names, following this convention:
- serial* RS-232C compatible physical Serial port
- sol* IPMI Serial-Over-Lan
- ssh* SSH session (may require setup before enabling)
A default console is set by declaring an alias as in the example above; however, a preferred console
This interface:
- supports N > 1 channels per target, of any type (serial, network, etc)
- allows raw traffic (not just pure ASCII), for example for serial console escape sequences, etc
- the client shall not need to be constantly reading to avoid
loosing data; the read path shall be (can be) implemented to
buffer everything since power on (by creating a power control
driver
ttbl.power.impl_c
that records everything; seettbl.console.serial_pc
for an example - allows setting general channel parameters
-
get_setup
(_target, _who, args, _files, _user_path)¶
-
put_setup
(target, who, args, _files, _user_path)¶
-
get_list
(_target, _who, _args, _files, _user_path)¶
-
put_enable
(target, who, args, _files, _user_path)¶
-
put_disable
(target, who, args, _files, _user_path)¶
-
get_state
(target, _who, args, _files, _user_path)¶
-
get_read
(target, who, args, _files, _user_path)¶
-
get_size
(target, _who, args, _files, _user_path)¶
-
put_write
(target, who, args, _files, _user_path)¶
-
ttbl.console.
generation_set
(target, console)¶
-
class
ttbl.console.
generic_c
(chunk_size=0, interchunk_wait=0.2, command_sequence=None, escape_chars=None)¶ General base console implementation
This object will implement a base console driver that reads from a file (the read file) and writes to a file (the write file) in the local filesystem.
The read / write files are named
console-CONSOLENAME.{read,write}
and are located in the target’s state directory. Thus there is no need for state, since the parameters are available in the call.The idea is that another piece (normally a power control unit that starts a background daemon) will be reading from the console in the target system and dumping data to the read file. For writing, the same piece takes whatever data is being provided and passes it on, or it can be written directly.
See
serial_pc
for an example of this model implemented over a traditional serial port and
for implementing an IPMI Serial-Over-Lan console.ssh_pc
for implementing a console simulated over an SSH connection.Parameters: - chunk_size (int) – (optional) when writing, break the writing in chunks of this size and wait interchunk_wait in between sending each chunk. By default is 0, which is disabled.
- interchunk_wait (float) – (optional) if chunk_size is enabled, time to wait in seconds in between each chunk.
- escape_chars (dict) –
(optional) dictionary of escape sequences for given characters to prefix in input stream.
If given, this is a dictionary of characters to strings, eg:
>>> escape_chars = {
>>>     '\x1b': '\x1b',
>>>     '~': '\\',
>>> }
in this case, when the input string to send to the device contains a \x1b (the ESC character), it will be prefixed with another one. If it contains a ~, it will be prefixed with a backslash.
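The escaping and chunking behaviours can be sketched like this; `prepare_input` and `chunked_write` are illustrative helpers, not the driver's actual code:

```python
import time

def prepare_input(data, escape_chars):
    # Prefix each character listed in escape_chars with its escape
    # string before it goes over the wire
    out = []
    for c in data:
        if c in escape_chars:
            out.append(escape_chars[c])
        out.append(c)
    return "".join(out)

def chunked_write(write, data, chunk_size=0, interchunk_wait=0.2):
    # chunk_size == 0 disables chunking; otherwise send slices of
    # chunk_size bytes with a pause in between, for slow receivers
    if chunk_size == 0:
        write(data)
        return
    for i in range(0, len(data), chunk_size):
        write(data[i:i + chunk_size])
        time.sleep(interchunk_wait)

print(prepare_input("a~b", {"~": "\\"}))   # → a\~b
```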
-
state
(target, component)¶ Return the given console’s state
Parameters: console (str) – (optional) console to enable; if missing, the default one Returns: True if enabled, False otherwise
-
read
(target, component, offset)¶ Return data read from the console since it started recording from a given byte offset.
Check
impl_c.read()
for common parametersParams int offset: offset from which to read Returns: data dictionary of values to pass to the client; the data is expected to be in a file which will be streamed to the client. >>> return dict(stream_file = CAPTURE_FILE, >>> stream_generation = MONOTONIC, >>> stream_offset = OFFSET)
this allows to support large amounts of data automatically; the generation is a number that is monotonically increased, for example, each time a power cycle happens. This is basically when a new file is created.
-
size
(target, component)¶ Return the amount of data currently read from the console.
Check
impl_c.read()
for common parametersReturns: number of bytes read from the console since the last power up.
-
write
(target, component, data)¶ Write bytes to the console
Check
impl_c.read()
for common parametersParameters: data – string of bytes or data to write to the console
-
class
ttbl.console.
serial_pc
(serial_file_name=None)¶ Implement a serial port console and data recorder
This class implements two interfaces:
power interface: to start a serial port recorder in the background as soon as the target is powered on. Anything read from the serial port is written to the console-NAME.read file and anything written to the console-NAME.write file is sent to the serial port.
The power interface is implemented by subclassing
ttbl.power.socat_pc
, which starts socat as daemon to serve as a data recorder and to pass data to the serial port from the read file.console interface: interacts with the console interface by exposing the data recorded in console-NAME.read file and writing to the console-NAME.write file.
Params str serial_file_name: (optional) name of the serial port file, which can be templated with %(FIELD)s as per ttbl.power.socat_pc (the low level implementation).
By default, it uses /dev/tty-TARGETNAME, which makes it easier to configure. The tty name linked to the target can be set with udev.
For example, create a serial port recorder power control / console driver and insert it into the power rail and the console of a target:
>>> serial0_pc = ttbl.console.serial_pc(console_file_name)
>>>
>>> ttbl.config.targets[name].interface_add(
>>>     "power",
>>>     ttbl.power.interface(
>>>         ...
>>>         serial0_pc,
>>>         ...
>>>     )
>>> )
>>> ttbl.config.targets[name].interface_add(
>>>     "console",
>>>     ttbl.console.interface(
>>>         serial0 = serial0_pc,
>>>         default = "serial0",
>>>     )
>>> )
-
on
(target, component)¶ Power on the component
Parameters: - target (ttbl.test_target) – target on which to act
- component (str) – name of the power controller we are modifying
-
class
ttbl.console.
ssh_pc
(hostname, port=22, chunk_size=0, interchunk_wait=0.1, extra_opts=None, command_sequence=None)¶ Implement a serial port over an SSH connection
This class implements two interfaces:
power interface: to start an SSH connection recorder in the background as soon as the target is powered on.
The power interface is implemented by subclassing
ttbl.power.socat_pc
, which starts socat as daemon to serve as a data recorder and to pass data to the connection from the read file. Anything read from the SSH connection is written to the console-NAME.read file and anything written to the console-NAME.write file is sent over the SSH connection.
console interface: interacts with the console interface by exposing the data recorded in console-NAME.read file and writing to the console-NAME.write file.
Params str hostname: USER[:PASSWORD]@HOSTNAME for the SSH server
Parameters: - port (int) – (optional) port to connect to (defaults to 22)
- extra_opts (dict) –
(optional) dictionary of extra SSH options and values to set in the SSH configuration (as described in ssh_config(5)).
Note they all have to be strings; e.g.:
>>> serial0_pc = ttbl.console.ssh_pc( >>> "USER:PASSWORD@HOSTNAME", >>> extra_opts = { >>> "Ciphers": "aes128-cbc,3des-cbc", >>> "Compression": "no", >>> })
Be careful what is changed, since it can break operation.
See
generic_c
for descriptions on chunk_size and interchunk_wait,impl_c
for command_sequence.For example:
>>> ssh0_pc = ttbl.console.ssh_pc("USERNAME:PASSWORD@HOSTNAME")
>>>
>>> ttbl.config.targets[name].interface_add(
>>>     "power",
>>>     ttbl.power.interface(
>>>         ...
>>>         ssh0_pc,
>>>         ...
>>>     )
>>> )
>>> ttbl.config.targets[name].interface_add(
>>>     "console",
>>>     ttbl.console.interface(
>>>         ssh0 = ssh0_pc,
>>>     )
>>> )
- FIXME:
- pass password via agent? file descriptor?
-
on
(target, component)¶ Power on the component
Parameters: - target (ttbl.test_target) – target on which to act
- component (str) – name of the power controller we are modifying
-
setup
(target, component, parameters)¶ Setup console parameters (implementation specific)
Check
impl_c.read()
for common parametersParameters: parameters (dict) – dictionary of implementation specific parameters Returns: nothing
8.8.3. Debugging Interface¶
-
ttbl.
debug
¶ alias of
ttbl.debug
8.8.4. Power Control Interface¶
8.8.4.1. Control power to targets¶
This interface provides means to power on/off targets and the individual components that compose the power rail of a target.
The interface is implemented by ttbl.power.interface
, which needs to
be attached to a target with ttbl.test_target.interface_add()
:
>>> ttbl.config.targets[NAME].interface_add(
>>> "INTERFACENAME",
>>> ttbl.power.interface(
>>> component0,
>>> component1,
>>> ...
>>> )
>>> )
each component is an instance of a subclass of
ttbl.power.impl_c
, which implements the actual control over
the power unit, such as:
- PDU socket:
Digital Logger's Web Power Switch 7
,YKush
power switch hub, Raritan EMX - relays:
USB-RLY08b
- control over IPMI:
ttbl.ipmi.pci
Also power components are able to:
- start / stop daemons in the server (socat, rsync, qemu, openocd…)
delay
the power on/off sequence- wait for some particular conditions to happen: a file
disappearing
orappearing
, a USB device isdetected
in the system
-
class
ttbl.power.
impl_c
(paranoid=False)¶ Implementation interface to drive a power component
A power component is an individual entity that provides one of the pieces of a power rail needed to power up a target.
It can be powered on or off, and its state can be queried.
A driver is made by subclassing this object, implementing the
on()
,off()
andget()
methods and then adding it to a power interfacettbl.power.interface
which is then attached to a target. Drivers implement the specifics to run the switches, PDUs, relays, etc.
Note these are designed to be as stateless as possible; in most cases all the state needed can be derived from the target and component parameters passed to the methods.
Remember no information can be stored in self or target, as the next call can come in another daemon implementing the same target. For storage, use the target's fsdb interface.
ALL the methods have to be blocking, so the operation is supposed to be completed and the status has to have been changed by the time the call returns. If the operation cannot be trusted to be blocking, set the paranoid parameter to True so the power interface will confirm the change has re-happened and enforce it again.
Parameters: paranoid (bool) – don't trust the operation is really blocking, as it should be, so double-check the state change happened and retry if not. -
power_on_recovery
= None¶ If the power on fails, automatically retry it by powering first off, then on again
-
paranoid_get_samples
= None¶ for paranoid power getting, how many samples we need to get that are the same for the value to be considered stable
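Such sampling could work as sketched below; `paranoid_get` is a hypothetical helper standing in for the interface's actual code, and `get` stands in for a driver's state query:

```python
def paranoid_get(get, samples_needed=3, retries=20):
    # Call get() until the last samples_needed readings agree, then
    # trust that value as the stable power state
    samples = []
    for _ in range(retries):
        samples.append(get())
        if len(samples) >= samples_needed \
           and len(set(samples[-samples_needed:])) == 1:
            return samples[-1]
    raise RuntimeError("power state did not stabilize")

# A flaky power sensor that settles on True after a few readings
readings = iter([True, False, True, True, True])
print(paranoid_get(lambda: next(readings)))   # → True
```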
-
exception
retry_all_e
(wait=None)¶ Exception raised when a power control implementation operation wants the whole power-rail reinitialized
-
on
(target, component)¶ Power on the component
Parameters: - target (ttbl.test_target) – target on which to act
- component (str) – name of the power controller we are modifying
-
-
class
ttbl.power.
interface
(*impls, **kwimpls)¶ Power control interface
Implements an interface that allows controlling the power to a target, which can be a single switch or a whole power rail of components that have to be powered on and off in a specific sequence.
-
get_list
(target, _who, _args, _files, _user_path)¶
-
get_get
(target, _who, _args, _files, _user_path)¶
-
put_on
(target, who, args, _files, _user_path)¶
-
put_off
(target, who, args, _files, _user_path)¶
-
put_cycle
(target, who, args, _files, _user_path)¶
-
put_reset
(target, who, args, _files, _user_path)¶
-
-
class
ttbl.power.
fake_c
(paranoid=False)¶ Fake power component which stores state in disk
Note this object doesn’t have to know much parameters on initialization, which allows us to share implementations.
It can rely on the target and component parameters to each method to derive where to act.
-
on
(target, component)¶ Power on the component
Parameters: - target (ttbl.test_target) – target on which to act
- component (str) – name of the power controller we are modifying
-
-
class
ttbl.power.
daemon_c
(cmdline, precheck_wait=0, env_add=None, kws=None, path=None, name=None, pidfile=None, mkpidfile=True, paranoid=False)¶ Generic power controller to start daemons in the server machine
FIXME: document
Parameters: cmdline (list(str)) – command line arguments to pass to
subprocess.check_output()
; this is a list of strings, first being the path to the command, the rest being the arguments.All the entries in the list are templated with %(FIELD)s expansion, where each field comes either from the kws dictionary or the target’s metadata.
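The %(FIELD)s expansion can be sketched with Python's dict-based string formatting; `expand_cmdline` is an illustrative helper, and the field names in the example are made up:

```python
def expand_cmdline(cmdline, target_kws, kws=None):
    # Template each argument against the instance's kws dictionary
    # merged over the target's metadata (target_kws stands in for
    # target.kws)
    d = dict(target_kws)
    d.update(kws or {})
    return [arg % d for arg in cmdline]

cmdline = ["/usr/bin/socat", "-lf", "%(path)s/%(component)s-socat.log"]
print(expand_cmdline(cmdline,
                     {"path": "/var/run/ttbd"},
                     {"component": "serial0"}))
# → ['/usr/bin/socat', '-lf', '/var/run/ttbd/serial0-socat.log']
```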
-
kws
= None¶ dictionary of keywords that can be used to template the command line with %(FIELD)s
-
exception
error_e
¶
-
exception
start_e
¶
-
verify
(target, component, cmdline_expanded)¶ Function that verifies if the daemon has started or not
For example, checking if a file has been created, etc
THIS MUST BE DEFINED
Examples:
>>> return os.path.exists(cmdline_expanded[0])
or
>>> return os.path.exists(self.pidfile % kws)
Returns: True if the daemon started, False otherwise
-
log_stderr
(target, component, stderrf=None)¶
-
on
(target, component)¶ Power on the component
Parameters: - target (ttbl.test_target) – target on which to act
- component (str) – name of the power controller we are modifying
-
-
class
ttbl.power.
socat_pc
(address1, address2, env_add=None, precheck_wait=0.2)¶ Generic power component that starts/stops socat as daemon
This class is meant to be subclassed for an implementation passing the actual addresses to use.
Parameters: - address1 (str) – first address for socat; will be templated
with %(FIELD)s to the target’s keywords and anything added in
daemon_c.kws
. - address2 (str) – second address for socat; templated as address1
- env_add (dict) – variables to add to the environment when running socat
- precheck_wait (float) – seconds to wait once starting before checking if the daemon is running; sometimes it dies after we check, so it is good to give it a wait.
This object (or what is derived from it) can be passed to a power interface for implementation, eg:
>>> ttbl.config.targets['TARGETNAME'].interface_add( >>> "power", >>> ttbl.power.interface( >>> ttbl.power.socat_pc(ADDR1, ADDR2) >>> ) >>> )
Upon power up, the socat daemon will be started, with its current directory set to the target’s state directory and a log file named after the component (NAME-socat.log). When powering off, the daemon is stopped.
Anything coming out of socat’s stderr is sent to a file called NAME-socat.stderr.
The addresses to specify are very specific to the intended usage, but for example:
PTY,link=console-%(component)s.write,rawer!!CREATE:console-%(component)s.read
creates a PTY which will pass whatever is written to
console-COMPONENT.write
to the second address, and whatever is read from it to console-COMPONENT.read
(!! serves as a bifurcator). Note you need to use rawer to ensure a clean pipe, otherwise the PTY layer might add \rs.
/dev/ttyS0,creat=0,rawer,b115200,parenb=0,cs8,bs1
opens a serial port and writes to it whatever is written to
console-COMPONENT.write
and anything read from the serial port will be appended to the file console-COMPONENT.read.
EXEC:'/usr/bin/ipmitool -H HOSTNAME -U USERNAME -E -I lanplus sol activate',sighup,sigint,sigquit
runs the program ipmitool, writes to stdin whatever is written to console-COMPONENT.write and whatever comes out of stdout is written to console-COMPONENT.read.
Be wary of adding options such as crnl to remove extra CRs (\r) before the newline (\n) when implementing consoles. The console channels are meant to be completely transparent.
For examples, look at
ttbl.console.serial_pc
and ttbl.ipmi.sol_console_pc.
**Gotchas and tricks for debugging**
Sometimes the daemon just dies and we are left wondering why;
prepend strace -fo /tmp/strace.log to the EXEC command, as in:
EXEC:'strace -fo /tmp/strace.log COMMANDTHATDIES'
then look at the server’s
/tmp/strace.log
when power cycling, or enable running the top level call under strace; it is often helpful.
-
verify
(target, component, cmdline_expanded)¶ Function that verifies if the daemon has started or not
For example, checking if a file has been created, etc
THIS MUST BE DEFINED
Examples:
>>> return os.path.exists(cmdline_expanded[0])
or
>>> return os.path.exists(self.pidfile % kws)
Returns: True if the daemon started, False otherwise
-
class
ttbl.pc.
delay
(on=0, off=0)¶ Introduce artificial delays when calling on/off/get to allow targets to settle.
This is meant to be used in a stacked list of power implementations given to a power control interface.
-
on
(target, component)¶ Power on the component
Parameters: - target (ttbl.test_target) – target on which to act
- component (str) – name of the power controller we are modifying
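For example, to give the hardware a couple of seconds to settle after switching an outlet on, the delay can be stacked after the component that does the switching (the target name, outlet URL and delay value below are placeholders):

```python
>>> ttbl.config.targets['TARGETNAME'].interface_add(
>>>     "power",
>>>     ttbl.power.interface(
>>>         ttbl.pc.dlwps7("http://admin:1234@sp1/4"),
>>>         # wait two seconds after power-on before continuing the rail
>>>         ttbl.pc.delay(on = 2),
>>>     )
>>> )
```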
-
-
class
ttbl.pc.
delay_til_file_gone
(poll_period=0.25, timeout=25, on=None, off=None, get=None)¶ Delay until a file disappears.
This is meant to be used in a stacked list of power implementations given to a power control interface.
-
on
(target, component)¶ Power on the component
Parameters: - target (ttbl.test_target) – target on which to act
- component (str) – name of the power controller we are modifying
-
-
class
ttbl.pc.
delay_til_file_appears
(filename, poll_period=0.25, timeout=25, action=None, action_args=None)¶ Delay until a file appears.
This is meant to be used in a stacked list of power implementations given to a power control interface.
-
on
(target, component)¶ Power on the component
Parameters: - target (ttbl.test_target) – target on which to act
- component (str) – name of the power controller we are modifying
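The waiting logic both classes implement can be sketched as a standalone function (hypothetical name; the real drivers act within a power rail and may raise on timeout, here we simply return a boolean):

```python
import os
import time

def wait_til_file_appears(filename, poll_period = 0.25, timeout = 25):
    """Poll every poll_period seconds until filename exists or timeout
    seconds have elapsed; returns True if the file appeared in time."""
    t0 = time.time()
    while time.time() - t0 < timeout:
        if os.path.exists(filename):
            return True
        time.sleep(poll_period)
    return False
```

delay_til_file_gone is the mirror image, polling until os.path.exists() turns False.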
-
-
class
ttbl.pc.
delay_til_usb_device
(serial, when_powering_on=True, want_connected=True, poll_period=0.25, timeout=25, action=None, action_args=None)¶ Delay power-on until a USB device dis/appears.
This is meant to be used in a stacked list of power implementations given to a power control interface.
Parameters: - serial (str) – Serial number of the USB device to monitor
- when_powering_on (bool) – Check when powering on if True (default) or when powering off (if False)
- want_connected (bool) – when checking, we want the device to be connected (True) or disconnected (False)
- action (collections.Callable) – action to execute when the
device is not found, before waiting. Note the first parameter
passed to the action is the target itself and then any other
parameter given in
action_args
- action_args – tuple of parameters to pass to
action
.
-
exception
not_found_e
¶ Exception raised when a USB device is not found
-
backend
= None¶
-
on
(target, component)¶ Power on the component
Parameters: - target (ttbl.test_target) – target on which to act
- component (str) – name of the power controller we are modifying
-
class
ttbl.pc.
dlwps7
(_url, reboot_wait_s=0.5)¶ Implement a power control interface to the Digital Logger’s Web Power Switch 7
Parameters: - _url (str) –
URL describing the unit and outlet number, in the form:
http://USER:PASSWORD@HOST:PORT/OUTLETNUMBER
where USER and PASSWORD are valid accounts set in the Digital Logger’s Web Power Switch 7 administration interface with access to the OUTLETNUMBER.
- reboot_wait (float) – Seconds to wait when power cycling an outlet from off to on (defaults to 0.5s) or after powering up.
Access language documented at http://www.digital-loggers.com/http.html.
If you get an error like:
Exception: Cannot find ‘<!– state=(?P<state>[0-9a-z][0-9a-z]) lock=[0-9a-z][0-9a-z] –>’ in power switch response
it might be that you are going through a proxy that is interfering; in some cases the proxy was mangling the authentication and imposing javascript execution, which made the driver fail.
-
on
(target, component)¶ Power on the component
Parameters: - target (ttbl.test_target) – target on which to act
- component (str) – name of the power controller we are modifying
-
state_regex
= <_sre.SRE_Pattern object>¶
-
get
(target, component)¶ Get the power status for the outlet
The unit returns the power state when querying the
/index.htm
path, as a comment inside the HTML body of the response; so we look for:
<!-- state=XY lock=ANY -->
XY is the hex bitmap of states against the outlet number. ANY is the hex lock bitmap (outlets that can’t change).
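Given the state_regex attribute above, the parsing can be sketched standalone (the exact regular expression and the bit-to-outlet mapping are assumptions inferred from the description; outlet_state is a hypothetical helper, not the driver's API):

```python
import re

# Pattern similar to what the driver searches for in the /index.htm body
state_regex = re.compile(
    r"<!-- state=(?P<state>[0-9a-fA-F]{2}) lock=[0-9a-fA-F]{2} -->")

def outlet_state(html, outlet):
    """Return True if outlet (1-8) is on, given the HTML body returned
    by the switch; assumes bit N-1 of the state bitmap maps to outlet N."""
    m = state_regex.search(html)
    if not m:
        raise ValueError("cannot find state comment in response")
    bitmap = int(m.group("state"), 16)
    return bool(bitmap & (1 << (outlet - 1)))
```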
8.8.4.2. Controlling targets via IPMI¶
This module implements multiple objects that can be used to control a target’s power or serial console via IPMI.
-
class
ttbl.ipmi.
pci
(hostname)¶ Power controller to turn on/off a server via IPMI
Parameters: This is normally used as part of a power rail setup, where an example configuration in /etc/ttbd-production/conf_*.py that would configure the power switching of a machine that also has a serial port would look like:
>>> ... >>> target.interface_add("power", ttbl.power.interface( >>> ( "BMC", ttbl.ipmi.pci("bmc_admin:secret@server1.internal.net") ), >>> ... >>> )
Warning
putting BMCs on an open network is not a good idea; it is recommended they are only exposed to an infrastructure network
Params str hostname: USER[:PASSWORD]@HOSTNAME of where the IPMI BMC is located; see commonl.password_get()
for the forms PASSWORD can take to obtain the password from service providers.
-
on
(target, component)¶ Power on the component
Parameters: - target (ttbl.test_target) – target on which to act
- component (str) – name of the power controller we are modifying
-
get
(target, component)¶ Return the component’s power state
Same parameters as
on()
Returns: power state: - True: powered on
- False: powered off
- None: this is a fake power unit, so it has no actual power state
-
pre_power_pos_setup
(target)¶ If target’s pos_mode is set to pxe, tell the BMC to boot off the network.
This is meant to be used as a pre-power-on hook (see
ttbl.power.interface
andttbl.test_target.power_on_pre_fns
).
-
-
class
ttbl.ipmi.
pci_ipmitool
(hostname)¶ Power controller to turn on/off a server via IPMI
Same as
pci
, but executing ipmitool in the shell instead of using a Python library.
-
on
(target, _component)¶ Power on the component
Parameters: - target (ttbl.test_target) – target on which to act
- component (str) – name of the power controller we are modifying
-
get
(target, component)¶ Return the component’s power state
Same parameters as
on()
Returns: power state: - True: powered on
- False: powered off
- None: this is a fake power unit, so it has no actual power state
-
pre_power_pos_setup
(target)¶ If target’s pos_mode is set to pxe, tell the BMC to boot off the network.
This is meant to be used as a pre-power-on hook (see
ttbl.power.interface
andttbl.test_target.power_on_pre_fns
).
-
-
class
ttbl.ipmi.
pos_mode_c
(hostname, timeout=2, retries=3)¶ Power controller to redirect a machine’s boot to network upon ON
This can be used in the power rail of a machine that can be provisioned with Provisioning OS, instead of using pre power-on hooks (such as
pci_ipmitool.pre_power_pos_setup()
). When the target is being powered on, this will be called and, if the value of the pos_mode property is pxe, the IPMI protocol will be used to tell the BMC to order the target to boot off the network with:
$ ipmitool chassis bootparam set bootflag force_pxe
otherwise, it’ll force to boot off the local disk with:
$ ipmitool chassis bootparam set bootflag force_disk
Note that for this to be successful and to remove the chance of race conditions, this component has to come before the component that powers on the machine via the BMC.
-
on
(target, _component)¶ Power on the component
Parameters: - target (ttbl.test_target) – target on which to act
- component (str) – name of the power controller we are modifying
-
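For instance, a power rail using it could look like this (the hostname and component names are illustrative, following the pci example above; note pos_mode_c comes before the BMC power component, as required):

```python
>>> target.interface_add("power", ttbl.power.interface(
>>>     # must come before the component that powers the machine on
>>>     # via the BMC, to avoid races
>>>     ( "pos_mode", ttbl.ipmi.pos_mode_c("bmc_admin:secret@server1.internal.net") ),
>>>     ( "BMC", ttbl.ipmi.pci("bmc_admin:secret@server1.internal.net") ),
>>> ))
```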
-
class
ttbl.ipmi.
sol_console_pc
(hostname, precheck_wait=0.5, chunk_size=5, interchunk_wait=0.1)¶ Implement a serial port over IPMI’s Serial-Over-Lan protocol
This class implements two interfaces:
power interface: to start an IPMI SoL recorder in the background as soon as the target is powered on.
The power interface is implemented by subclassing
ttbl.power.socat_pc
, which starts socat as a daemon to serve as a data recorder and to pass data to the serial port. It is configured to start ipmitool with the sol activate arguments, which leaves it forwarding traffic back and forth. Anything read from the serial port is written to the console-NAME.read file and anything written to the console-NAME.write file is sent to the serial port.
console interface: interacts with the console interface by exposing the data recorded in console-NAME.read file and writing to the console-NAME.write file.
Params str hostname: USER[:PASSWORD]@HOSTNAME of where the IPMI BMC is located Look at
ttbl.console.generic_c
for a description of chunk_size and interchunk_wait. This is in general needed when whatever is behind SSH is not doing flow control and we want the server to slow down sending things. For example, create an IPMI recorder console driver and insert it into the power rail (its power control interface makes it get called to start/stop recording when the target powers on/off); it is then also registered as the target’s console:
>>> sol0_pc = ttbl.console.serial_pc(console_file_name) >>> >>> ttbl.config.targets[name].interface_add( >>> "power", >>> ttbl.power.interface( >>> ... >>> sol0_pc, >>> ... >>> ) >>> ttbl.config.targets[name].interface_add( >>> "console", >>> ttbl.console.interface( >>> sol0 = sol0_pc, >>> default = "sol0", >>> ) >>> )
-
on
(target, component)¶ Power on the component
Parameters: - target (ttbl.test_target) – target on which to act
- component (str) – name of the power controller we are modifying
-
class
ttbl.ipmi.
sol_ssh_console_pc
(hostname, ssh_port=22, chunk_size=5, interchunk_wait=0.1)¶ IPMI SoL over SSH console
This augments
ttbl.console.ssh_pc
in that it will first disable the SOL connection to avoid conflicts with other users. This forces the input into the SSH channel to the BMC to be chunked five bytes at a time with a 0.1 second delay in between, which seems to give most BMCs a breather regarding flow control.
Params str hostname: USER[:PASSWORD]@HOSTNAME of where the IPMI BMC is located -
on
(target, component)¶ Power on the component
Parameters: - target (ttbl.test_target) – target on which to act
- component (str) – name of the power controller we are modifying
-
-
class
ttbl.raritan_emx.
pci
(url, outlet_number, https_verify=True)¶ Power control interface for the Raritan EMX family of PDUs (eg: PX3-*)
Tested with a PX3-5190R with FW v3.3.10.5-43736
In any place in the TCF server configuration where a power control implementation served by this PDU is needed, insert:
>>> import ttbl.raritan_emx >>> >>> ... >>> ttbl.raritan_emx.pci('https://USER:PASSWORD@HOSTNAME', OUTLET#)
Parameters: - url (str) –
URL to access the PDU in the form:
https://[USERNAME:PASSWORD@]HOSTNAME
Note the login credentials are optional, but must be matching whatever is configured in the PDU for HTTP basic authentication and permissions to change outlet state.
- outlet (int) – number of the outlet in the PDU to control; this is an integer 1-N (N varies depending on the PDU model)
- https_verify (bool) – (optional, default True) do or do not HTTPS certificate verification.
The RPC implementation is documented in https://help.raritan.com/json-rpc/emx/v3.4.0; while this driver uses the Raritan SDK driver, probably this is overkill–we could do the calls using JSON-RPC directly using jsonrpclib to avoid having to install the SDK, which is not packaged for easy redistribution and install.
Bill of materials
- a Raritan EMX-compatible PDU (such as the PX3)
- a network cable
- a connection to a network switch to which the server is also connected (nsN) – ideally this shall be an infrastructure network, isolated from a general use network and any test networks.
System setup
In the server
Install the Raritan’s SDK (it is not available as a PIP package) from https://www.raritan.com/support/product/emx (EMX JSON-RPC SDK):
$ wget http://cdn.raritan.com/download/EMX/version-3.5.0/EMX_JSON_RPC_SDK_3.5.0_45371.zip $ unzip -x EMX_JSON_RPC_SDK_3.5.0_45371.zip $ install -m 0755 -o root -g root -d /usr/local/lib/python2.7/site-packages/raritan $ cp -a emx-json-rpc-sdk-030500-45371/emx-python-api/raritan/* /usr/local/lib/python2.7/site-packages/raritan
As the Raritan SDK had to be installed manually, outside of PIP or distro package management, ensure Python looks into /usr/local/lib/python2.7/site-packages for packages.
Add your server configuration in a /etc/ttbd-production/conf_00_paths.py:
sys.path.append("/usr/local/lib/python2.7/site-packages")
so it is parsed before any configuration that tries to import
ttbl.raritan_emx
.
Connecting the PDU
- Connect the PDU to the network
- Assign the right IP and ensure name resolution works; convention is to use a short name spN (for Switch Power number N)
- Configure a username/password with privilege to set the outlet state
- Configure the system to power up all outlets after power loss (this is needed so the infrastructure can bring itself up without intervention, as for example it is a good practice to connect the servers to switched outlets so they can be remotely controlled).
-
on
(_target, _component)¶ Power on the component
Parameters: - target (ttbl.test_target) – target on which to act
- component (str) – name of the power controller we are modifying
-
class
ttbl.apc.
pci
(hostname, outlet, oid=None)¶ Power control driver for APC PDUs using SNMP
This is a very hackish implementation that attempts to require the least setup possible. It hardcodes the OIDs and MIBs because APC’s MIBs are not publicly available and the setup becomes complicated (please contribute a better one if you can help).
To use in any place where a power control element is needed:
>>> import ttbl.apc >>> >>> ... >>> ttbl.apc.pci("HOSTNAME", 4) >>>
for doing power control on APC PDU HOSTNAME on outlet 4.
Parameters: Tested with:
- AP7930
References used:
- https://tobinsramblings.wordpress.com/2011/05/03/snmp-tutorial-apc-pdus/
- http://mibs.snmplabs.com/asn1/POWERNET-MIB
- https://www.apc.com/shop/us/en/products/POWERNET-MIB-V4-3-1/P-SFPMIB431
- https://download.schneider-electric.com/files?p_enDocType=Firmware+-+Released&p_Doc_Ref=APC_POWERNETMIB_431&p_File_Name=powernet431.mib
System setup
configure an IP address (static, DHCP) to the APC PDU
in the web configuration:
- Administration > Network > SNMPv1 > access: enable
- Administration > Network > SNMPv1 > access control: set the community name private to Access Type Write+
This driver currently supports no port, username or passwords – it is recommended to place these units in a private protected network until such support is added.
Finding OIDs, etc
FIXME: incomplete. This is only needed on a system to find out the numbers; it is not needed on the servers:
Install the POWERNET-MIB:
$ mkdir -p ~/.snmp/mibs/ $ wget http://mibs.snmplabs.com/asn1/POWERNET-MIB -O ~/.snmp/mibs/POWERNET-MIB $ echo mibs POWERNET-MIB >> ~/.snmp/snmp.conf
Find OID for querying number of outlets:
$ snmptranslate -On POWERNET-MIB::sPDUOutletControlTableSize.0 .1.3.6.1.4.1.318.1.1.4.4.1.0
Find OID for controlling outlet #1:
$ snmptranslate -On POWERNET-MIB::sPDUOutletCtl.1 .1.3.6.1.4.1.318.1.1.4.4.2.1.3.1
-
table_size
= [4, 1, 0]¶ MIB for the command to list the number of outlets
Obtained with:
$ snmptranslate -On POWERNET-MIB::sPDUOutletControlTableSize.0 .1.3.6.1.4.1.318.1.1.4.4.1.0
-
pdu_outlet_ctl_prefix
= [4, 2, 1, 3]¶ MIB for the command to control outlets
Obtained with:
$ snmptranslate -On POWERNET-MIB::sPDUOutletCtl.1 .1.3.6.1.4.1.318.1.1.4.4.2.1.3.1
the last digit is the outlet number, 1..N.
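Putting the attributes together, the full OID that controls outlet N is the concatenation of the main OID, the control prefix and the outlet number; a short sketch (outlet_ctl_oid is a hypothetical helper name, not part of the driver):

```python
# Values documented for this driver
oid = [1, 3, 6, 1, 4, 1, 318, 1, 1, 4]     # main OID for the APC PDU
pdu_outlet_ctl_prefix = [4, 2, 1, 3]       # outlet control command

def outlet_ctl_oid(outlet):
    """Compose the full OID controlling a given outlet (1..N)."""
    return oid + pdu_outlet_ctl_prefix + [outlet]

# For outlet 1 this yields .1.3.6.1.4.1.318.1.1.4.4.2.1.3.1,
# matching the snmptranslate output above.
```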
-
oid
= [1, 3, 6, 1, 4, 1, 318, 1, 1, 4]¶ Main OID for the APC PDU (this can be changed with the oid parameter to the constructor)
-
get
(target, component)¶ Return the component’s power state
Same parameters as
on()
Returns: power state: - True: powered on
- False: powered off
- None: this is a fake power unit, so it has no actual power state
-
on
(target, component)¶ Power on the component
Parameters: - target (ttbl.test_target) – target on which to act
- component (str) – name of the power controller we are modifying
-
class
ttbl.pc_ykush.
ykush
(ykush_serial, port)¶ A power control implementation using an YKUSH switchable hub https://www.yepkit.com/products/ykush
This is mainly for devices that are USB powered, where the YKUSH hub is used to control the power to the ports.
Note this device appears as a child connected to the YKUSH hub, with vendor/device IDs 0x04d8:f2f7 (Microchip Technology)
You can find the right one with lsusb.py -ciu:
usb1 1d6b:0002 09 2.00 480MBit/s 0mA 1IF (Linux 4.3.3-300.fc23.x86_64 xhci-hcd xHCI Host Controller 0000:00:14.0) hub 1-2 2001:f103 09 2.00 480MBit/s 0mA 1IF (D-Link Corp. DUB-H7 7-port USB 2.0 hub) hub 1-2.5 0424:2514 09 2.00 480MBit/s 2mA 1IF (Standard Microsystems Corp. USB 2.0 Hub) hub 1-2.5.4 04d8:f2f7 00 2.00 12MBit/s 100mA 1IF (Yepkit Lda. YKUSH YK20345) 1-2.5.4:1.0 (IF) 03:00:00 2EPs (Human Interface Device:No Subclass:None)
Note the Yepkit Ltd, YK20345; YK20345 is the serial number.
To avoid permission issues:
choose a Unix group that the daemon will be running under
add a UDEV rule to
/etc/udev/rules.d/90-tcf.rules
(or other name):# YKUSH power switch hubs SUBSYSTEM=="usb", ATTR{idVendor}=="04d8", ATTR{idProduct}=="f2f7", GROUP="GROUPNAME", MODE = "660"
restart UDEV, replug your hubs:
$ sudo udevadm control --reload-rules
Parameters: -
exception
notfound_e
¶
-
backend
= None¶
-
on
(target, _component)¶ Power on the component
Parameters: - target (ttbl.test_target) – target on which to act
- component (str) – name of the power controller we are modifying
-
get
(target, _component)¶ Return the component’s power state
Same parameters as
on()
Returns: power state: - True: powered on
- False: powered off
- None: this is a fake power unit, so it has no actual power state
-
plug
(target, _thing)¶ Plug thing into target
Caller owns both target and thing
Parameters: - target (ttbl.test_target) – target where to plug
- thing (ttbl.test_target) – thing to plug into target
-
unplug
(target, _thing)¶ Unplug thing from target
Caller owns target (not thing necessarily)
Parameters: - target (ttbl.test_target) – target where to unplug from
- thing (ttbl.test_target) – thing to unplug
-
class
ttbl.usbrly08b.
rly08b
(serial_number)¶ A power control implementation for the USB-RLY08B relay controller https://www.robot-electronics.co.uk/htm/usb_rly08btech.htm.
This serves as base for other drivers to implement
per relay power controllers
,USB pluggers as *thing* or power controllers
. This device offers eight relays for AC and DC. The relays are turned on and off through a byte-oriented serial protocol over an FTDI chip that shows up as:
$ lsusb.py -iu ... 1-1.1.1 04d8:ffee 02 2.00 12MBit/s 100mA 2IFs (Devantech Ltd. USB-RLY08 00023456) 1-1.1.1:1.0 (IF) 02:02:01 1EP (Communications:Abstract (modem):AT-commands (v.25ter)) cdc_acm tty/ttyACM0 1-1.1.1:1.1 (IF) 0a:00:00 2EPs (CDC Data:) cdc_acm ...
Note the 00023456 is the serial number.
To avoid permission issues, either:
The default rules in most Linux platforms will make the device node owned by group dialout, so make the daemon have that supplementary GID.
add a UDEV rule to
/etc/udev/rules.d/90-ttbd.rules
(or other name):SUBSYSTEM == "tty", ENV{ID_SERIAL_SHORT} == "00023456", GROUP="GROUPNAME", MODE = "660"
restart udev:
$ sudo udevadm control --reload-rules
replug your hubs so the rule is set.
Parameters: -
exception
not_found_e
¶
-
backend
= None¶
-
class
ttbl.usbrly08b.
pc
(serial_number, relay)¶ Power control implementation that uses a relay to close/open a circuit on on/off
-
on
(target, _component)¶ Power on the component
Parameters: - target (ttbl.test_target) – target on which to act
- component (str) – name of the power controller we are modifying
-
Implement a button press by closing/opening a relay circuit
Press a target’s button
Parameters: - target (ttbl.test_target) – target where the button is
- button (str) – name of button
Release a target’s button
Parameters: - target (ttbl.test_target) – target where the button is
- button (str) – name of button
-
class
ttbl.usbrly08b.
plugger
(serial_number, bank)¶ Implement a USB multiplexor/plugger that allows a DUT to be plugged to Host B, or to Host A when unplugged. It follows that it can work as a USB cutter if Host A is disconnected.
It also implements a power control implementation, so when powered off, it plugs to Host B and when powered on, it plugs to Host A. Likewise, if Host B is disconnected, when off the DUT is effectively disconnected. This serves, for example, to connect a USB storage drive to a target that can access it when turned on; when off, the drive can be connected to another machine that can, e.g., use it to flash software.
This uses a
rly08b
relay bank to do the switching.Parameters: System setup details
A USB connection is four cables: VCC (red), D+ (white), D- (green), GND (black) plus a shielding wrapping it all.
A relay has three terminals: NO, C and NC.
- ON means C and NC are connected
- OFF means C and NO are connected
- (it is recommended to label the cable connected to NO as OFF/PLUGGED and the one to NC as ON/UNPLUGGED)
We use the USB-RLY8B, which has eight individual relays, so we can switch two devices between two USB hosts each.
We connect the DUT’s cables and host cables as follows:
DUT1         pin   Host A1/ON   pin   Host B1/OFF  pin
VCC (red)    1C    VCC (red)    1NO   VCC (red)    1NC
D+ (white)   2C    D+ (white)   2NO   D+ (white)   2NC
D- (green)   3C    D- (green)   3NO   D- (green)   3NC
GND (black)  4C    GND (black)  4NO   GND (black)  4NC

DUT2         pin   Host A2/ON   pin   Host B2/OFF  pin
VCC (red)    5C    VCC (red)    5NO   VCC (red)    5NC
D+ (white)   6C    D+ (white)   6NO   D+ (white)   6NC
D- (green)   7C    D- (green)   7NO   D- (green)   7NC
GND (black)  8C    GND (black)  8NO   GND (black)  8NC

For example, to switch an Arduino 101 between a NUC and the TTBD server that flashes and controls it:
- DUT (C) is our Arduino 101,
- Host B (NC) is another NUC machine in the TCF infrastructure
- Host A (NO) is the TTBD server (via the YKUSH port)
For a pure USB cutter (where we don’t need the connection to a TTBD server on MCU boards that expose a separate debugging cable for power and flashing), we’d connect the USB port like:
- DUT (C) is the MCU’s USB port
- Host B (NC) is the NUC machine in the TCF infrastructure
- Host A (NO) is left disconnected
Note
switching ONLY the VCC and GND connections (always leaving D+ and D- connected to Host A, to avoid Host B making a data connection while only being used to supply power) does not work.
Host A still detects the power differential in D+/D- and thinks there is a device; it tries to enable it, fails and disables the port.
Note
We can’t turn them on or off at the same time because the HW doesn’t allow setting a mask and we could override settings for the other ports we are not controlling here; another server process might be tweaking the other ports.
**Configuration details**
Example:
To connect a USB device from system A to system B, so power off means connected to B, power-on connected to A, add to the configuration:
>>> target.interface_add("power", ttbl.power.interface( >>> ttbl.usbrly08b.plugger("00023456", 0) >>> ) >>> ...
Thus to connect to system B:
$ tcf acquire devicename $ tcf power-off devicename
Thus to connect to system A:
$ tcf power-on devicename
Example:
If system B is the ttbd server, then you can refine it to test the USB device is connecting/disconnecting.
To connect a USB drive to a target before the target is powered on (in this example, a NUC mini-PC that boots off a connected USB drive), the configuration block would be:
>>> target.interface_add("power", ttbl.power.interface( >>> # Ensure the dongle is / has been connected to the server >>> ttbl.pc.delay_til_usb_device("7FA50D00FFFF00DD", >>> when_powering_on = False, >>> want_connected = True), >>> ttbl.usbrly08b.plugger("00023456", 0), >>> # Ensure the dongle disconnected from the server >>> ttbl.pc.delay_til_usb_device("7FA50D00FFFF00DD", >>> when_powering_on = True, >>> want_connected = False), >>> # power on the target >>> ttbl.pc.dlwps7("http://admin:1234@SPNAME/SPPORT"), >>> # let it boot >>> ttbl.pc.delay(2) >>> ) >>> ...
Note that the serial number 7FA50D00FFFF00DD is that of the USB drive and 00023456 is the serial number of the USB-RLY8b board which implements the switching (in this case we use bank 0 of relays, from 1 to 4).
Example:
An Arduino 101 is connected to a NUC mini-PC as a USB device using the thing interface that we can control from a script or command line:
In this case we create an interconnect that wraps all the targets together (the Arduino 101, the NUC) to indicate they operate together and the configuration block would be:
ttbl.config.interconnect_add(ttbl.test_target("usb__nuc-02__a101-04"), ic_type = "usb__host__device") ttbl.config.targets['nuc-02'].add_to_interconnect('usb__nuc-02__a101-04') ttbl.config.targets['a101-04'].add_to_interconnect('usb__nuc-02__a101-04') ttbl.config.targets['nuc-02'].thing_add('a101-04', ttbl.usbrly08b.plugger("00033085", 1))
Where 00033085 is the serial number for the USB-RLY8b which implements the USB plugging/unplugging (in this case we use bank 1 of relays, from 5 to 8)
-
plug
(target, _thing)¶ Plug thing into target
Caller owns both target and thing
Parameters: - target (ttbl.test_target) – target where to plug
- thing (ttbl.test_target) – thing to plug into target
-
unplug
(target, _thing)¶ Unplug thing from target
Caller owns target (not thing necessarily)
Parameters: - target (ttbl.test_target) – target where to unplug from
- thing (ttbl.test_target) – thing to unplug
-
on
(target, _component)¶ Power on the component
Parameters: - target (ttbl.test_target) – target on which to act
- component (str) – name of the power controller we are modifying
-
get
(target, _thing=None)¶ Parameters: - target (ttbl.test_target) – target where to unplug from
- thing (ttbl.test_target) – thing to unplug
Returns: True if thing is connected to target, False otherwise.
Daemons that can be started in the server as part of a power rail:
8.8.4.3. Power control module to start DHCP daemon when a network is powered on¶
-
ttbl.dhcp.
tftp_dir
= '/var/lib/tftpboot'¶ Directory where the TFTP tree is located
-
ttbl.dhcp.
syslinux_path
= '/usr/share/syslinux'¶ Directory where the syslinux tree is located
-
ttbl.dhcp.
template_rexpand
(text, kws)¶ Expand Python keywords in a template repeatedly until none are left.
If there are substitution fields in the config text, replace them with the keywords; repeat until there are none left (as some of the keywords might bring in new substitution fields).
Stops after ten iterations.
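A simplified stand-in for the expansion loop (assuming plain %(FIELD)s substitution via Python's % operator; the real function may differ in details):

```python
def template_rexpand_sketch(text, kws):
    """Expand %(FIELD)s substitutions repeatedly until none are left,
    giving up after ten iterations (some keyword values may themselves
    contain further substitution fields)."""
    for _ in range(10):
        expanded = text % kws
        if expanded == text:    # nothing left to substitute
            return expanded
        text = expanded
    return text
```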
-
ttbl.dhcp.
pxe_architectures
= {'efi-bc': {'copy_files': ['/usr/share/syslinux/efi64/', '/home/ttbd/public_html/x86_64/vmlinuz-tcf-live', '/home/ttbd/public_html/x86_64/initramfs-tcf-live'], 'boot_filename': 'syslinux.efi', 'rfc_code': '00:07'}, 'efi-x86_64': {'copy_files': ['/usr/share/syslinux/efi64/', '/home/ttbd/public_html/x86_64/vmlinuz-tcf-live', '/home/ttbd/public_html/x86_64/initramfs-tcf-live'], 'boot_filename': 'syslinux.efi', 'rfc_code': '00:09'}, 'x86': {'copy_files': ['/usr/share/syslinux/lpxelinux.0', '/usr/share/syslinux/ldlinux.c32'], 'boot_filename': 'lpxelinux.0', 'rfc_code': '00:00'}}¶ List of PXE architectures we support
This is a dictionary keyed by architecture name (ARCHNAME); the value is a dictionary keyed by the following keywords
rfc_code
(str) a hex string in the format “HH:HH”, documenting a PXE architecture as described in https://datatracker.ietf.org/doc/rfc4578/?include_text=1 (section 2.1).This is used directly for the ISC DHCP configuration of the option architecture-type:
Code Arch Name Description ----- ----------- -------------------- 00:00 x86 Intel x86PC 00:01 NEC/PC98 00:02 EFI Itanium 00:03 DEC Alpha 00:04 Arc x86 00:05 Intel Lean Client 00:06 EFI IA32 00:07 efi-bc EFI BC (byte code) 00:08 EFI Xscale 00:09 efi-x86_64 EFI x86-64
boot_filename
(str): name of the file sent over PXE to a target when it asks what to boot. This will be converted to TFTP path/ttbd-INSTANCE/ARCHNAME/BOOT_FILENAME
which will be requested by the target.
copy_files
(list of str): list of files or directories that have to be copied/rsynced to TFTPDIR/ttbd-INSTANCE/ARCHNAME
; everything needed for the client to boot BOOT_FILENAME
has to be listed here for them to be copied and made available over TFTP. This allows patching this at runtime based on the site configuration and Linux distribution.
The DHCP driver, when powered on, will create
TFTPDIR/ttbd-INSTANCE/ARCHNAME
, rsync the files or trees incopy_files
to it and then symlinkTFTPDIR/ttbd-INSTANCE/ARCHNAME/pxelinux.cfg
toTFTPDIR/ttbd-INSTANCE/pxelinux.cfg
(as the configurations are common to all the architectures).To extend in the system configuration, add to any server configuration file in
/etc/ttbd-INSTANCE/conf_*.py
; for example, to use another bootloader for eg,x86
>>> import ttbl.dhcp >>> ... >>> ttbl.dhcp.pxe_architectures['x86']['copy_files'].append( >>> '/usr/local/share/syslinux/lpxelinux1.0') >>> ttbl.dhcp.pxe_architectures['x86']['boot_filename'] = 'lpxelinux1.0'
-
class
ttbl.dhcp.
pci
(if_addr, if_net, if_len, ip_addr_range_bottom, ip_addr_range_top, mac_ip_map=None, allow_unmapped=False, debug=False, ip_mode=4)¶ -
exception
error_e
¶
-
exception
start_e
¶
-
dhcpd_path
= '/usr/sbin/dhcpd'¶ This class implements a power control unit that can be made part of a power rail for a network interconnect.
When turned on, it starts a DHCP server to provide addresses on the network.
With a configuration such as:
import ttbl.dhcp

ttbl.config.targets['nwa'].pc_impl.append(
    ttbl.dhcp.pci("fc00::61:1", "fc00::61:0", 112,
                  "fc00::61:2", "fc00::61:fe", ip_mode = 6)
)
It would start a DHCP IPv6 server on fc00::61:1, network fc00::61:0/112, serving IPv6 addresses from :2 to :fe.
-
on
(target, _component)¶ Start DHCPd servers on the network interface described by target
-
exception
-
ttbl.dhcp.
pos_cmdline_opts
= {'tcf-live': ['initrd=%(pos_http_url_prefix)sinitramfs-%(pos_image)s ', 'rd.live.image', 'selinux=0', 'audit=0', 'ip=dhcp', 'root=/dev/nfs', 'rd.luks=0', 'rd.lvm=0', 'rd.md=0', 'rd.dm=0', 'rd.multipath=0', 'ro', 'plymouth.enable=0 ', 'loglevel=2']}¶ List of strings with Linux kernel command line options to be passed by the bootloader
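These entries use Python %(FIELD)s placeholders that are expanded with the target's keywords before being handed to the bootloader; a minimal, self-contained sketch of that expansion (the option list is a trimmed excerpt and the keyword values are made up):

```python
# Sketch: expand %(FIELD)s placeholders in a pos_cmdline_opts-style list
# with Python %-formatting. Trimmed excerpt; keyword values are
# illustrative, not what the server would really compute.
pos_cmdline_opts = [
    'initrd=%(pos_http_url_prefix)sinitramfs-%(pos_image)s ',
    'rd.live.image', 'selinux=0', 'ip=dhcp', 'root=/dev/nfs',
]

kws = {
    'pos_http_url_prefix': 'http://192.168.97.1/x86_64/',
    'pos_image': 'tcf-live',
}

# Expand each option and join into a single kernel command line
cmdline = " ".join(opt % kws for opt in pos_cmdline_opts)
print(cmdline)
```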
-
ttbl.dhcp.
power_on_pre_pos_setup
(target)¶ Hook called before power on to setup TFTP to boot a target in Provisioning Mode
The DHCP server started by
ttbl.dhcp
is always configured to direct a target to PXE boot syslinux; this will ask the TFTP server for a configuration file for the MAC address of the target.

This function is called before powering on the target to create said configuration file; based on the value of the target’s pos_mode property, a configuration file that boots the Provisioning OS or that redirects to the local disk will be created.
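What said configuration file might contain can be sketched as follows; this is a hypothetical, self-contained approximation of the two cases (labels, file names and kernel arguments are assumptions, not what ttbl.dhcp literally writes):

```python
# Sketch: build a syslinux configuration fragment depending on a
# target's "pos_mode" property, mimicking what power_on_pre_pos_setup
# does. All names here are hypothetical.
def pxelinux_config(pos_mode):
    if pos_mode == "pxe":
        # Boot the Provisioning OS over the network
        return (
            "DEFAULT pos\n"
            "LABEL pos\n"
            "  KERNEL vmlinuz-tcf-live\n"
            "  APPEND initrd=initramfs-tcf-live root=/dev/nfs ip=dhcp\n")
    # Any other mode: chainload whatever is on the local disk
    return (
        "DEFAULT localboot\n"
        "LABEL localboot\n"
        "  LOCALBOOT 0\n")

print(pxelinux_config("pxe"))
```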
8.8.4.4. Drivers to create targets on virtual machine using QEMU¶
Note
This deprecates all previous ttbl/tt_qemu*.py modules
These are all raw building blocks which require extra configuration. To create targets, use higher level configuration functions such as conf_00_lib_pos.target_qemu_pos_add() or conf_00_lib_mcu.target_qemu_zephyr_add().
These drivers implement different objects needed to implement targets that run as QEMU virtual machines:
- pc: a power rail controller to control a QEMU virtual machine running as a daemon, providing interfaces for starting/stopping, debugging and BIOS/kernel/initrd image flashing; it exposes serial consoles that can be interacted with via the ttbl.console.generic_c object.
- qmp_c: an object to talk to QEMU’s control socket.
- plugger_c: an adaptor to plug physical USB devices to QEMU targets with the ttbl.things interface.
- network_tap_pc: a power rail controller to set up tap devices to a (virtual or physical) network device that represents a network.
-
class
ttbl.qemu.
qmp_c
(sockfile)¶ Simple handler for the QEMU Monitor Protocol (QMP) that allows us to run basic QMP commands and report on status.
-
exception
exception
¶ Base QMP exception
-
exception
cant_connect_e
¶ Cannot connect to QMP socket; probably QEMU didn’t start
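QMP is newline-delimited JSON over a socket; the framing can be sketched in a self-contained way as below (a real client such as qmp_c additionally reads QEMU's greeting and sends qmp_capabilities before issuing commands):

```python
import json

# Sketch of QMP wire framing: one JSON object per line. Commands carry
# an "execute" member plus optional "arguments"; replies carry either
# "return" (success) or "error".
def qmp_encode(command, **arguments):
    msg = { "execute": command }
    if arguments:
        msg["arguments"] = arguments
    return json.dumps(msg) + "\n"

def qmp_decode(line):
    reply = json.loads(line)
    if "error" in reply:
        raise RuntimeError(reply["error"].get("desc", "QMP error"))
    return reply.get("return", reply)

print(qmp_encode("device_add", driver = "usb-host", hostbus = 1).strip())
```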
-
exception
-
class
ttbl.qemu.
pc
(qemu_cmdline, nic_model='e1000')¶ Manage QEMU instances exposing interfaces for TCF to control it
This object exposes:
- power control interface: to start / stop QEMU
- images interface: to specify the kernel / initrd / BIOS images
- debug interface: to manage the virtual CPUs, debug via GDB
A target can be created and this object attached to the multiple interfaces to expose said functionalities, like for example (pseudo code):
>>> target = ttbl.test_target("name")
>>> qemu_pc = ttbl.qemu.pc([ "/usr/bin/qemu-system-x86_64", ... ])
>>> target.interface_add("power", ttbl.power.interface(qemu_pc))
>>> target.interface_add("debug", ttbl.debug.interface(qemu_pc))
>>> target.interface_add("images", ttbl.images.interface(qemu_pc))
For a complete, functional example see
conf_00_lib_pos.target_qemu_pos_add()
orconf_00_lib_mcu.target_qemu_zephyr_add()
.Parameters: - qemu_cmdline (list(str)) –
command line to start QEMU, specified as a list of [ PATH, ARG1, ARG2, … ].
Don’t add -daemonize! This way the daemon is part of the process tree and killed when we kill the parent process
Note this will be passed to
ttbl.power.daemon_c
, so ‘%(FIELD)s’ will be expanded with the tags and runtime properties of the target. - nic_model (str) – (optional) Network Interface Card emulation used to create a network interface.
General design notes
the driver will add command line options to create a QMP access socket (to control the instance and launch the VM stopped), a GDB control socket and a PID file; thus, don’t specify -qmp, -pidfile, -S or -gdb on the command line.
any command line is allowed as long as it doesn’t interfere with those.
as in all the TCF server code, these might be called from different processes that share the same configuration; hence we can’t rely on any runtime storage. Any runtime values that are needed are kept in a filesystem database (self.fsdb).
We start the process with Python’s subprocess.Popen(), but then it goes into the background; if the parent dies, the main server process will reap it (as prctl() has set SIG_IGN on SIGCHLD) and any subprocess of the main process can kill it when power-off is called on it.
ttbl.power.daemon_c
takes care of all that.
Firmware interface: What to load/run
The
images
interface can be used to direct QEMU to load BIOS/kernel/initrd images. Otherwise, it will execute stuff off the disk (if the command line is set correctly). This is done by setting the target properties:- qemu-image-bios
- qemu-image-kernel
- qemu-image-initrd
these can be set to the name of a file in the server’s namespace, normally off the user storage area (FIXME: implement this limit in the flash() interface, allow specifics for POS and such).
Additionally, qemu-image-kernel-args can be set to arguments to the kernel.
Serial consoles
This driver doesn’t intervene on serial consoles (if wanted or not). The way to create serial consoles is to add command line such as:
>>> cmdline += [
>>>     "-chardev",
>>>     "socket,id=NAME,server,nowait,path=%%(path)s/console-NAME.write,logfile=%%(path)s/console-NAME.read",
>>>     "-serial", "chardev:NAME"
>>> ]
which makes QEMU write anything received on such serial console on file STATEDIR/console-NAME.read and send to the virtual machine anything written to socket STATEDIR/console-NAME.write (where STATEDIR is the target specific state directory).
From there, a
ttbl.console.generic_c
console interface implementation can be added:

>>> target.interface_add("console", ttbl.console.interface(
>>>     ( "NAME", ttbl.console.generic_c() )
>>> ))
the console handling code in
ttbl.console.generic_c
knows from NAME where to find the read/write files.

Networking
Most networking can rely on TCF creating a virtual network device over a TAP device associated to each target and interconnect they are connected to–look at
ttbl.qemu.network_tap_pc
>>> target = ttbl.test_target("name")
>>> target.add_to_interconnect(
>>>     'nwa', dict(
>>>         mac_addr = "02:61:00:00:00:05",
>>>         ipv4_addr = "192.168.97.5",
>>>         ipv6_addr = "fc00::61:05"))
>>> qemu_pc = ttbl.qemu.pc([ "/usr/bin/qemu-system-x86_64", ... ])
>>> target.interface_add("power", ttbl.power.interface(
>>>     ( "tuntap-nwa", ttbl.qemu.network_tap_pc() ),
>>>     qemu_pc,
>>> ))
the driver automatically adds the command line to add corresponding network devices for each interconnect the target is a member of.
It is possible to do NAT or other networking setups; the command line needs to be specified manually though.
-
flash
(target, images)¶ Flash images onto target
Parameters: - target (ttbl.test_target) – target where to flash
- images (dict) – dictionary keyed by image type of the files (in the servers’s filesystem) that have to be flashed.
The implementation assumes, per configuration, that this driver knows how to flash the images of the given type (hence why it was configured) and shall abort if given an unknown type.
If multiple images are given, they shall be (when possible) flashed all at the same time.
-
verify
(target, component, cmdline_expanded)¶ Function that verifies if the daemon has started or not
For example, checking if a file has been created, etc
THIS MUST BE DEFINED
Examples:
>>> return os.path.exists(cmdline_expanded[0])
or
>>> return os.path.exists(self.pidfile % kws)
Returns: True if the daemon started, False otherwise
-
on
(target, component)¶ Power on the component
Parameters: - target (ttbl.test_target) – target on which to act
- component (str) – name of the power controller we are modifying
-
debug_list
(target, component)¶ Provide debugging information about the component
Return None if not currently debugging in this component, otherwise a dictionary keyed by string with information.
If the dictionary is empty, it is assumed that the debugging is enabled but the target is off, so most services can’t be accessed.
Known fields:
- GDB: string describing the location of the GDB bridge associated to this component in the format PROTOCOL:ADDRESS:PORT (eg: tcp:some.host.name:4564); it shall be possible to feed this directly to the gdb target remote command.
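A small self-contained sketch of splitting that field into pieces a client could feed to GDB:

```python
# Sketch: parse a debug_list() "GDB" field of the form
# PROTOCOL:ADDRESS:PORT (eg: "tcp:some.host.name:4564").
def parse_gdb_bridge(spec):
    protocol, rest = spec.split(":", 1)
    address, port = rest.rsplit(":", 1)      # port is the last field
    return protocol, address, int(port)

protocol, address, port = parse_gdb_bridge("tcp:some.host.name:4564")
print(protocol, address, port)
```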
-
debug_start
(target, components)¶ Put the components in debugging mode.
Note it might need a power cycle for the change to be effective, depending on the component.
Parameters: - target (ttbl.test_target) – target on which to operate
- components (list(str)) – list of components on which to operate
-
debug_stop
(target, components)¶ Take the components out of debugging mode.
Note it might need a power cycle for the change to be effective, depending on the component.
Parameters: - target (ttbl.test_target) – target on which to operate
- components (list(str)) – list of components on which to operate
-
debug_halt
(target, _components)¶ Halt the components’ CPUs
Note it might need a power cycle for the change to be effective, depending on the component.
-
debug_resume
(target, _components)¶ Resume the components’ CPUs
Note it might need a power cycle for the change to be effective, depending on the component.
-
debug_reset
(target, _components)¶ Reset the components’ CPUs
Note it might need a power cycle for the change to be effective, depending on the component.
-
class
ttbl.qemu.
plugger_c
(name, **kwargs)¶ Adaptor class to plug host-platform USB devices to QEMU VMs
Parameters: - name (str) – thing’s name
- kwargs (dict) –
parameters for
qmp_c.command()
’s device_add method, which for example, could be:- driver = “usb-host”
- hostbus = BUSNUMBER
- hostaddr = USBADDRESS
Sadly, there is no way to tell QEMU to hotplug a device by serial number, so according to docs, the only way to do it is hardcoding the device and bus number.
eg:
>>> ttbl.config.target_add(
>>>     ttbl.test_target("drive_34"),
>>>     tags = { },
>>>     target_type = "usb_disk")
>>>
>>> ttbl.config.targets['qu04a'].interface_add(
>>>     "things",
>>>     ttbl.things.interface(
>>>         drive_34 = ttbl.qemu.plugger_c(
>>>             "drive_34", driver = "usb-host", hostbus = 1, hostaddr = 67),
>>>         usb_disk = "drive_34",    # alias for a101_04
>>>     )
>>> )
-
plug
(target, thing)¶ Plug thing into target
Caller owns both target and thing
Parameters: - target (ttbl.test_target) – target where to plug
- thing (ttbl.test_target) – thing to plug into target
-
unplug
(target, thing)¶ Unplug thing from target
Caller owns target (not thing necessarily)
Parameters: - target (ttbl.test_target) – target where to unplug from
- thing (ttbl.test_target) – thing to unplug
-
get
(target, thing)¶ Parameters: - target (ttbl.test_target) – target where to unplug from
- thing (ttbl.test_target) – thing to unplug
Returns: True if thing is connected to target, False otherwise.
-
class
ttbl.qemu.
network_tap_pc
¶ Creates a tap device and attaches it to an interconnect’s network device
A target declares connectivity to one or more interconnects; when this object is instantiated as part of the power rail:
>>> target.interface_add(
>>>     "power",
>>>     ttbl.power.interface(
>>>         ...
>>>         ( "tuntap-nwka", ttbl.qemu.network_tap_pc() ),
>>>         ...
>>>     )
>>> )
because the component is called tuntap-nwka, the driver assumes it needs to tap to the interconnect nwka because that’s where the target is connected:
$ tcf list -vv TARGETNAME | grep -i nwka
interconnects.nwka.ipv4_addr: 192.168.120.101
interconnects.nwka.ipv4_prefix_len: 24
interconnects.nwka.ipv6_addr: fc00::a8:78:65
interconnects.nwka.ipv6_prefix_len: 112
interconnects.nwka.mac_addr: 94:c6:91:1c:9e:d9
Upon powering on the target, the on() method will create a network interface and link it to the network interface representing the interconnect, which was created when the interconnect was powered on (conf_00_lib.vlan_pci); it will assign it, in the TCF server, the IP addresses described above.

The name of the interface created is stored in a target property named after the component (for our example, a property called tuntap-nwka) so that other components can use it:
For QEMU, for example, you need a command line such as:
-nic tap,model=ETH_NIC_MODEL,script=no,downscript=no,ifname=IFNAME
however, the
ttbl.qemu.pc
driver automatically recognizes a tuntap-NETWORKNAME is there and inserts the command line needed.
When the target is powered off, this component will just remove the interface.
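The naming convention can be sketched as below; this is a simplified, self-contained approximation of what ttbl.qemu.pc does internally, and the interface name tnwka01 is made up:

```python
# Sketch: derive the interconnect name from a power component called
# "tuntap-<ICNAME>" and compose the QEMU -nic option pointing at the
# tap interface (whose real name would come from the target property
# of the same name as the component).
def qemu_nic_args(component, ifname, nic_model = "e1000"):
    assert component.startswith("tuntap-"), component
    icname = component[len("tuntap-"):]
    nic = "tap,model=%s,script=no,downscript=no,ifname=%s" % (
        nic_model, ifname)
    return [ "-nic", nic ], icname

args, icname = qemu_nic_args("tuntap-nwka", "tnwka01")
print(icname, args)
```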
-
on
(target, component)¶ Power on the component
Parameters: - target (ttbl.test_target) – target on which to act
- component (str) – name of the power controller we are modifying
8.8.4.5. Interaction with JTAGs and similar using OpenOCD¶
This module provides the building block to debug many boards with OpenOCD.
Class ttbl.openocd.pc
is a power controller that, when added
to a power rail, will start OpenOCD and then allow using it to
perform debug operations and flashing on the target’s board.
-
ttbl.openocd.
boards
= {'': {'interface': None, 'config': '', 'board': '', 'addrmap': ''}, 'arduino_101': {'targets': ['x86', 'arc'], 'board': None, 'addrmap': 'quark_se_a101', 'interface': 'interface/ftdi/flyswatter2.cfg', 'config': '\ninterface ftdi\nftdi_serial "%(serial_string)s"\n# Always needed, or openocd fails -100\nftdi_vid_pid 0x0403 0x6010\nsource [find board/quark_se.cfg]\n\nquark_se.quark configure -event gdb-attach {\n reset halt\n gdb_breakpoint_override hard\n}\n\nquark_se.quark configure -event gdb-detach {\n resume\n shutdown\n}\n', 'reset_halt_command': 'reset; reset halt', 'hack_reset_after_power_on': True}, 'frdm_k64f': {'board': None, 'addrmap': 'frdm_k64f', 'interface': None, 'config': 'interface cmsis-dap\ncmsis_dap_serial %(serial_string)s\nsource [find target/k60.cfg]\n', 'targets': ['arm'], 'write_command': 'flash write_image erase %(file)s %(address)s', 'target_id_names': {0: 'k60.cpu'}}, 'galileo': {'config': '\ninterface ftdi\n# Always needed, or openocd fails -100\nftdi_vid_pid 0x0403 0x6010\nftdi_serial "%(serial_string)s"\nsource [find board/quark_x10xx_board.cfg]\n', 'targets': ['x86'], 'addrmap': 'quark_x1000'}, 'nrf51': {'board': None, 'addrmap': 'nrf5x', 'interface': None, 'config': 'source [find interface/jlink.cfg]\njlink serial %(serial_string)s\ntransport select swd\nset WORKAREASIZE 0\nsource [find target/nrf51.cfg]\n', 'targets': ['arm'], 'write_command': 'program %(file)s verify'}, 'nrf52': {'board': None, 'addrmap': 'nrf5x', 'interface': None, 'config': '\nsource [find interface/jlink.cfg]\njlink serial %(serial_string)s\ntransport select swd\nset WORKAREASIZE 0\nsource [find target/nrf51.cfg]\n', 'targets': ['arm'], 'write_command': 'program %(file)s verify'}, 'nrf52840': {'board': None, 'addrmap': 'nrf5x', 'interface': None, 'config': '\nsource [find interface/jlink.cfg]\njlink serial %(serial_string)s\ntransport select swd\nset WORKAREASIZE 0\nsource [find target/nrf51.cfg]\n', 'targets': ['arm'], 'write_command': 'program %(file)s 
verify'}, 'qc10000_crb': {'hack_reset_after_power_on': True, 'config': '\ninterface ftdi\nftdi_serial "%(serial_string)s"\n# Always needed, or openocd fails -100\nftdi_vid_pid 0x0403 0x6010\n\nftdi_channel 0\nftdi_layout_init 0x0010 0xffff\nftdi_layout_signal nTRST -data 0x0100 -oe 0x0100\n\nsource [find board/quark_se.cfg]\n', 'targets': ['x86', 'arc'], 'addrmap': 'quark_se', 'reset_halt_command': 'reset; reset halt'}, 'quark_d2000_crb': {'config': '\ninterface ftdi\nftdi_serial "%(serial_string)s"\n# Always needed, or openocd fails -100\nftdi_vid_pid 0x0403 0x6014\nftdi_channel 0\n\nftdi_layout_init 0x0000 0x030b\nftdi_layout_signal nTRST -data 0x0100 -noe 0x0100\nftdi_layout_signal nSRST -data 0x0200 -oe 0x0200\n\n\n# default frequency but this can be adjusted at runtime\n#adapter_khz 1000\nadapter_khz 6000\n\nreset_config trst_only\n\nsource [find target/quark_d20xx.cfg]\n', 'targets': ['x86'], 'addrmap': 'quark_d2000_crb'}, 'quark_d2000_crb_v8': {'config': '\ninterface ftdi\nftdi_serial "%(serial_string)s"\n# Always needed, or openocd fails -100\nftdi_vid_pid 0x0403 0x6014\nftdi_channel 0\n\nftdi_layout_init 0x0000 0x030b\nftdi_layout_signal nTRST -data 0x0100 -noe 0x0100\nftdi_layout_signal nSRST -data 0x0200 -oe 0x0200\n\n\n# default frequency but this can be adjusted at runtime\n#adapter_khz 1000\nadapter_khz 6000\n\nreset_config trst_only\n\nsource [find target/quark_d2000.cfg]\n', 'targets': ['x86'], 'addrmap': 'quark_d2000_crb'}, 'quark_se_ctb': {'board': 'quark_se', 'addrmap': 'quark_se', 'interface': None, 'config': '\ninterface ftdi\nftdi_serial "%(serial_string)s"\n# Always needed, or openocd fails -100\nftdi_vid_pid 0x0403 0x6010\n\n# oe_n 0x0200\n# rst 0x0800\n\nftdi_channel 0\nftdi_layout_init 0x0000 0xffff\nftdi_layout_signal nTRST -data 0x0100 -oe 0x0100\n', 'targets': ['x86', 'arc'], 'hack_reset_after_power_on': True}, 'sam_e70_xplained': {'board': None, 'addrmap': 'sam_e70_xplained', 'interface': None, 'config': 'interface 
cmsis-dap\ncmsis_dap_serial %(serial_string)s\nsource [find target/atsamv.cfg]\n', 'targets': ['arm'], 'write_command': 'flash write_image erase %(file)s %(address)s', 'target_id_names': {0: 'atsame70q21.cpu'}}, 'sam_v71_xplained': {'board': None, 'addrmap': 'sam_v71_xplained', 'interface': None, 'config': 'interface cmsis-dap\ncmsis_dap_serial %(serial_string)s\nsource [find target/atsamv.cfg]\n', 'targets': ['arm'], 'write_command': 'flash write_image erase %(file)s %(address)s', 'target_id_names': {0: 'samv71.cpu'}}, 'snps_em_sk': {'board': None, 'addrmap': 'snps_em_sk', 'interface': None, 'config': 'interface ftdi\nftdi_serial "%(serial_string)s"\n# Always needed, or openocd fails -100\nftdi_vid_pid 0x0403 0x6014\nsource [find board/snps_em_sk.cfg]\n', 'targets': ['arc'], 'target_id_names': {0: 'arc-em.cpu'}}}¶ Board description dictionary
This is a dictionary keyed by board / MCU name; when the OpenOCD driver is loaded, it is given this name and the entry is opened to get some operation values.
Each entry is another dictionary of key/value where key is a string, value is whatever.
FIXME: many missing
hack_reset_halt_after_init
-
class
ttbl.openocd.
action_logadapter_c
(logger, extra)¶ -
process
(msg, kwargs)¶ Process the logging message and keyword arguments passed in to a logging call to insert contextual information. You can either manipulate the message itself, the keyword args or both. Return the message and kwargs modified (or not) to suit your needs.
Normally, you’ll only need to override this one method in a LoggerAdapter subclass for your specific needs.
-
-
class
ttbl.openocd.
pc
(serial, board, debug=False, openocd_path='/usr/bin/openocd', openocd_scripts='/usr/share/openocd/scripts')¶ Parameters: - serial (str) – serial number of the target board; this is usually a USB serial number.
- board (str) – name of the board we are connecting against;
this has to be defined in
boards
orboard_synonyms
. - debug (bool) – (optional) run OpenOCD in debugging mode, printing extra information to the log (default False).
target ID
OpenOCD will operate on targets (different from TCF’s targets); these might be one or more CPUs in the debugged system. Each has an ID, which by default is zero.
component to OpenOCD target mapping
Each component configured in the target addition maps to an OpenOCD target in boards[X][targets].
**OLD OLD **
This is a flasher object that uses OpenOCD to provide flashing and GDB server support.
The object starts an OpenOCD instance (that runs as a daemon) – it does this behaving as a power-control implementation that is plugged at the end of the power rail.
To execute commands, it connects to the daemon via TCL and runs them using the
'capture "OPENOCDCOMMAND"'
TCL command (FIXME: is there a better way?). The telnet port is open for manual debugging (check your firewall! no passwords!); the GDB ports are also available.

The class knows the configuration settings for different boards (as given in the board_name parameter). It is also possible to point it to specific OpenOCD paths when different builds / versions need to be used.
Note how entry points from the flasher_c class all start with underscore. Functions
__SOMETHING()
are those that have to be called with a_expect_mgr
context taken [see comments on__send_command
for the reason.Parameters: board_name (str) – name of the board to use, to select proper configuration parameters. Needs to be declared in ttbl.flasher.openocd_c._boards. When starting OpenOCD, run a reset halt immediately after. This is used when flashing, as we power cycle before to try to have the target in a proper state–we want to avoid it running any code that might alter the state again.
Now, this is used in combination with another setting, board specific, that says if the reset has to be done or not in method :meth:_power_on_do_openocd_verify().
But why? Because some Quark SE targets, when put in deep sleep mode, OpenOCD is unable to reset halt them, returning something like:
> reset halt
JTAG tap: quark_se.cltap tap/device found: 0x0e765013 (mfg: 0x009 (Intel), part: 0xe765, ver: 0x0)
Enabling arc core tap
JTAG tap: quark_se.arc-em enabled
Enabling quark core tap
JTAG tap: quark_se.quark enabled
target is still running!
target running, halt it first
quark_se_target_reset could not write memory
in procedure ‘reset’ called at file “command.c”, line 787

So what we are trying to do, and it is a horrible hack, is to hopefully catch the CPU before it gets into that mode, and when it does, it bails out if it fails to reset and restarts OpenOCD and maybe (maybe) at some point it will get it.
Now, this is by NO MEANS a proper fix. The right fix would be for OpenOCD to be able to reset in any circumstance (which it doesn’t). An alternative would be to find some kind of memory location OpenOCD can write to that will take the CPU out of whichever state it gets stuck at which we can run when we see that.
Zephyr’s sample samples/board/quark_se/power_mgr is very good at making this happen.
-
hard_recover_rest_time
= None¶ FIXME
-
hack_reset_after_power_on
= None¶ FIXME:
-
hack_reset_halt_after_init
= None¶ Immediately after running the OpenOCD initialization sequence, reset halt the board.
This is meant to be used when we know we are power cycling before flashing. The board will start running as soon as we power it on, thus we ask OpenOCD to stop it immediately after initializing. There is still a big window of time in which the board can get itself into a bad state by running its own code.
(bool, default False)
-
hack_reset_after_init
= None¶ Immediately after running the OpenOCD initialization sequence, reset the board.
This is meant to be used for hacking some boards that don’t start OpenOCD properly unless this is done.
(bool, default False)
-
exception
error
¶
-
exception
expect_connect_e
¶
-
verify
(target, component, cmdline_expanded)¶ Function that verifies if the daemon has started or not
For example, checking if a file has been created, etc
THIS MUST BE DEFINED
Examples:
>>> return os.path.exists(cmdline_expanded[0])
or
>>> return os.path.exists(self.pidfile % kws)
Returns: True if the daemon started, False otherwise
8.8.4.6. Power control module to start a rsync daemon when a network is powered-on¶
-
class
ttbl.rsync.
pci
(address, share_name, share_path, port=873, uid=None, gid=None, read_only=True)¶ -
exception
error_e
¶
-
exception
start_e
¶
-
path
= '/usr/bin/rsync'¶ This class implements a power control unit that starts an rsync daemon to serve one path to a network.
Thus, when the associated target is powered on, the rsync daemon is started; when off, rsync is killed.
E.g.: an interconnect gets an rsync server to share some files that targets might use:
>>> ttbl.interface_add("power", ttbl.power.interface(
>>>     ttbl.rsync.pci("192.168.43.1", 'images',
>>>                    '/home/ttbd/images'),
>>>     vlan_pci()
>>> ))
>>> ...
-
on
(target, _component)¶ Start the daemon, generating first the config file
-
exception
8.8.4.7. Power control module to start a socat daemon when a network is powered-on¶
This socat daemon can provide tunneling services to allow targets to access outside isolated test networks via the server.
-
class
ttbl.socat.
pci
(proto, local_addr, local_port, remote_addr, remote_port)¶ -
exception
error_e
¶
-
exception
start_e
¶
-
path
= '/usr/bin/socat'¶ This class implements a power control unit that can forward ports in the server to other places in the network.
It can be used to provide access points in the NUTs (Networks Under Test) for the testcases to use.
For example, given a NUT represented by
NWTARGET
which has an IPv4 address of 192.168.98.1 in the ttbd server, a port redirection from port 8080 to an external proxy server proxy-host.in.network:8080 would be implemented as:>>> ttbl.config.targets[NWTARGET].pc_impl.append( >>> ttbl.socat.pci('tcp', >>> '192.168.98.1', 8080, >>> 'proxy-host.in.network', 8080))
Then to facilitate the work of test scripts, it’d make sense to export tags that explain where the proxy is:
>>> ttbl.config.targets[NWTARGET].tags_update({
>>>     'ftp_proxy': 'http://192.168.98.1:8080',
>>>     'http_proxy': 'http://192.168.98.1:8080',
>>>     'https_proxy': 'http://192.168.98.1:8080',
>>> })
-
on
(target, _component)¶ Power on the component
Parameters: - target (ttbl.test_target) – target on which to act
- component (str) – name of the power controller we are modifying
-
exception
8.8.5. Other interfaces¶
8.8.5.1. Press target’s buttons¶
A target that has physical buttons that can be pressed can be instrumented so they can be pressed/released. This interface provides means to access said interface.
A target will offer the interface
to
press each button, each of which is implemented by different instances
of ttbl.buttons.impl_c
.
Implementation interface for a button driver
A button can be pressed or it can be released; its current state can be obtained.
Press a target’s button
Parameters: - target (ttbl.test_target) – target where the button is
- button (str) – name of button
Release a target’s button
Parameters: - target (ttbl.test_target) – target where the button is
- button (str) – name of button
Get a target’s button state
Parameters: - target (ttbl.test_target) – target where the button is
- button (str) – name of button
Returns: True if pressed, False otherwise.
Buttons interface to the core target API
This provides access to all of the target’s buttons, independent of their implementation, so they can be pressed, released or their state queried.
An instance of this gets added as an object to the main target with:
>>> ttbl.config.targets['android_tablet'].interface_add(
>>>     "buttons",
>>>     ttbl.buttons.interface(
>>>         power = ttbl.usbrly08b.button("00023456", 4),
>>>         vol_up = ttbl.usbrly08b.button("00023456", 3),
>>>         vol_down = ttbl.usbrly08b.button("00023456", 2),
>>>     )
>>> )
where in this case the buttons are implemented with a USB-RLY08B relay board.

This, for example, can be used to instrument the power, volume up and volume down buttons of a tablet to control power switching. In the case of most Android tablets, the power rail then becomes:
>>> target.interface_add("power", ttbl.power.interface(
>>>     ttbl.buttons.pci_buttons_released(
>>>         [ "vol_up", "vol_down", "power" ]),
>>>     ttbl.buttons.pci_button_sequences(
>>>         sequence_off = [
>>>             ( 'power', 'press' ),
>>>             ( 'vol_down', 'press' ),
>>>             ( 'resetting', 11 ),
>>>             ( 'vol_down', 'release' ),
>>>             ( 'power', 'release' ),
>>>         ],
>>>         sequence_on = [
>>>             ( 'power', 'press' ),
>>>             ( 'powering', 5 ),
>>>             ( 'power', 'release' ),
>>>         ]
>>>     ),
>>>     ttbl.pc.delay_til_usb_device("SERIALNUMBER"),
>>>     ttbl.adb.pci(4036, target_serial_number = "SERIALNUMBER"),
>>> ))
>>>
>>> ttbl.config.targets['android_tablet'].interface_add(
>>>     "buttons",
>>>     ttbl.buttons.interface(
>>>         power = ttbl.usbrly08b.button("00023456", 4),
>>>         vol_up = ttbl.usbrly08b.button("00023456", 3),
>>>         vol_down = ttbl.usbrly08b.button("00023456", 2)
>>>     )
>>> )
Parameters: impls (dict) – dictionary keyed by button name and which values are instantiation of button drivers inheriting from
ttbl.buttons.impl_c
Names have to be valid Python symbol names.
Execute a sequence of button actions on a target
The sequence argument has to be a list of pairs:
- ( ‘press’, BUTTON-NAME)
- ( ‘release’, BUTTON-NAME)
- ( ‘wait’, NUMBER-OF-SECONDS)
Execute a sequence of button actions on a target
The sequence argument has to be a list of pairs:
- ( ‘press’, BUTTON-NAME)
- ( ‘release’, BUTTON-NAME)
- ( ‘wait’, NUMBER-OF-SECONDS)
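The semantics of such a sequence can be sketched with a small interpreter; buttons here are mocked with an action log rather than real ttbl.buttons.impl_c drivers:

```python
import time

# Sketch: interpret a button action sequence of (ACTION, ARGUMENT)
# pairs as described above; 'press'/'release' take a button name,
# 'wait' takes a number of seconds.
def run_sequence(sequence, log):
    for action, arg in sequence:
        if action in ("press", "release"):
            log.append((action, arg))        # would drive the hardware
        elif action == "wait":
            time.sleep(arg)
            log.append(("wait", arg))
        else:
            raise ValueError("unknown action: %s" % action)

log = []
run_sequence([ ("press", "power"), ("wait", 0), ("release", "power") ], log)
print(log)
```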
List buttons on a target and their state
Power control implementation that clicks a button as a step to power on or off something on a target.
Power on the component
Parameters: - target (ttbl.test_target) – target on which to act
- component (str) – name of the power controller we are modifying
Power off the component
Same parameters as
on()
Return the component’s power state
Same parameters as
on()
Returns: power state: - True: powered on
- False: powered off
- None: this is a fake power unit, so it has no actual power state
Power control implementation that executes a button sequence on power on, and another on power off.
Power on the component
Parameters: - target (ttbl.test_target) – target on which to act
- component (str) – name of the power controller we are modifying
Power off the component
Same parameters as
on()
Return the component’s power state
Same parameters as
on()
Returns: power state: - True: powered on
- False: powered off
- None: this is a fake power unit, so it has no actual power state
Power control implementation that ensures a list of buttons are released (not pressed) before powering on a target.
Power on the component
Parameters: - target (ttbl.test_target) – target on which to act
- component (str) – name of the power controller we are modifying
Power off the component
Same parameters as
on()
Return the component’s power state
Same parameters as
on()
Returns: power state: - True: powered on
- False: powered off
- None: this is a fake power unit, so it has no actual power state
8.8.5.2. Stream and snapshot capture interface¶
This module implements an interface to capture things in the server and then return them to the client.
This can be used to, for example:
capture screenshots of a screen, by connecting the target’s output to a framegrabber, for example:
…
and then running something such as ffmpeg on its output
capture a video stream (with audio) when the controller can say when to start and when to end
capture network traffic with tcpdump
-
class
ttbl.capture.
impl_c
(stream, mimetype)¶ Implementation interface for a capture driver
The target will list the available capturers in the capture tag.
Parameters: -
start
(target, capturer)¶ If this is a streaming capturer, start capturing the stream
Usually starts a program that is active, capturing to a file until the
stop_and_get()
method is called.Parameters: - target (ttbl.test_target) – target on which we are capturing
- capturer (str) – name of this capturer
Returns: dictionary of values to pass to the client, usually nothing
-
stop_and_get
(target, capturer)¶ If this is a streaming capturer, stop streaming and return the captured data or take a snapshot and return it.
This stops the capture of the stream and returns the file, or takes a snapshot capture and returns it.
Parameters: - target (ttbl.test_target) – target on which we are capturing
- capturer (str) – name of this capturer
Returns: dictionary of values to pass to the client, including the data; to stream a large file, include a member in this dictionary called stream_file pointing to the file’s path; eg:
>>> return dict(stream_file = CAPTURE_FILE)
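A minimal sketch of how a driver might implement this interface. A real driver subclasses ttbl.capture.impl_c; here a stand-in base class (and the hypothetical fake_snapshot_c capturer) is defined so the example is self-contained and runnable:

```python
import os
import tempfile

# Stand-in for ttbl.capture.impl_c so this sketch runs standalone;
# a real driver would subclass ttbl.capture.impl_c instead.
class impl_c:
    def __init__(self, stream, mimetype):
        self.stream = stream      # True for stream capturers, False for snapshots
        self.mimetype = mimetype

class fake_snapshot_c(impl_c):
    """Hypothetical snapshot capturer; a real one would run e.g. gvnccapture."""
    def __init__(self):
        impl_c.__init__(self, False, "text/plain")

    def start(self, target, capturer):
        # snapshot capturers have nothing to do on start
        return {}

    def stop_and_get(self, target, capturer):
        # take the snapshot and stream the resulting file back to the client
        output = os.path.join(tempfile.gettempdir(),
                              "capture-%s-%s.txt" % (target, capturer))
        with open(output, "w") as f:
            f.write("fake snapshot data")
        return dict(stream_file = output)

driver = fake_snapshot_c()
# the real API passes a ttbl.test_target object; a plain string is
# used here only to keep the sketch self-contained
r = driver.stop_and_get("target1", "cap0")
```

The returned dictionary's stream_file member is what tells the server to stream the capture file back to the client.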
-
-
class
ttbl.capture.
interface
(**impls)¶ Interface to capture something in the server related to a target
An instance of this gets added as an object to the target object with:
>>> ttbl.config.targets['qu05a'].interface_add(
>>>     "capture",
>>>     ttbl.capture.interface(
>>>         vnc0 = ttbl.capture.vnc(PORTNUMBER),
>>>         vnc0_stream = ttbl.capture.vnc_stream(PORTNUMBER),
>>>         hdmi0 = ttbl.capture.ffmpeg(...),
>>>         screen = "vnc0",
>>>         screen_stream = "vnc0_stream",
>>>     )
>>> )
Note how screen has been made an alias of vnc0 and screen_stream an alias of vnc0_stream.
Parameters: impls (dict) – dictionary keyed by capture name and whose values are instantiations of capture drivers inheriting from
ttbl.capture.impl_c
or names of other capturers (to serve as aliases). Names have to be valid Python symbol names.
-
start
(who, target, capturer)¶ If this is a streaming capturer, start capturing the stream
Parameters: - who (str) – user who owns the target
- target (ttbl.test_target) – target on which we are capturing
- capturer (str) – capturer to use, as registered in
ttbl.capture.interface
.
Returns: dictionary of values to pass to the client
-
stop_and_get
(who, target, capturer)¶ If this is a streaming capturer, stop streaming and return the captured data or if no streaming, take a snapshot and return it.
Parameters: - who (str) – user who owns the target
- target (ttbl.test_target) – target on which we are capturing
- capturer (str) – capturer to use, as registered in
ttbl.capture.interface
.
Returns: dictionary of values to pass to the client
-
list
(target)¶ List capturers available on a target
Parameters: target (ttbl.test_target) – target on which we are capturing
-
request_process
(target, who, method, call, args, _files, _user_path)¶ Process a request into this interface from a proxy / brokerage
When the ttbd daemon is exporting access to a target via any interface (e.g: REST over Flask or D-Bus or whatever), this implements a bridge to pipe those requests into this interface.
Parameters: - target (test_target) – target upon which we are operating
- who (str) – user who is making the request
- method (str) – ‘POST’, ‘GET’, ‘DELETE’ or ‘PUT’ (mapping to HTTP requests)
- call (str) – interface’s operation to perform (it’d map to the different methods the interface exposes)
- args (dict) – dictionary of key/value with the arguments to the call, some might be JSON encoded.
- files (dict) – dictionary of key/value with the files uploaded via forms
(https://flask.palletsprojects.com/en/1.1.x/api/#flask.Request.form)
- user_path (str) – path to where user files are located
Returns: dictionary of results, call specific e.g.:
>>> dict(
>>>     result = "SOMETHING",  # convention for unified result
>>>     output = "something",
>>>     value = 43
>>> )
For an example, see
ttbl.buttons.interface
.
-
-
class
ttbl.capture.
generic_snapshot
(name, cmdline, mimetype, pre_commands=None, extension='')¶ This is a generic snapshot capturer that can invoke any program to capture a snapshot.
For example, in a server configuration file, define a capturer that will connect to VNC and take a screenshot:
>>> capture_screenshot_vnc = ttbl.capture.generic_snapshot(
>>>     "%(id)s VNC @localhost:%(vnc_port)s",
>>>     # need to make sure vnc_port is defined in the target's tags
>>>     "gvnccapture -q localhost:%(vnc_port)s %(output_file_name)s",
>>>     mimetype = "image/png"
>>> )
Then attach the capture interface to the target with:
>>> ttbl.config.targets['TARGETNAME'].interface_add(
>>>     "capture",
>>>     ttbl.capture.interface(
>>>         vnc0 = capture_screenshot_vnc,
>>>         ...
>>>     )
>>> )
Now the command:
$ tcf capture-get TARGETNAME vnc0 file.png
will download to
file.png
a capture of the target’s screen via VNC.
Parameters: - name (str) –
name for error messages from this capturer.
E.g.: %(id)s HDMI
- cmdline (str) –
command line to invoke to capture the snapshot.
E.g.: ffmpeg -i /dev/video-%(id)s; in this case udev has been configured to create a symlink called /dev/video-TARGETNAME so we can uniquely identify the device associated to screen capture for said target.
- mimetype (str) – MIME type of the capture output, eg image/png
- pre_commands (list) –
(optional) list of commands (str) to execute before the command line, to, for example, set parameters; e.g.:
>>> pre_commands = [
>>>     # set some video parameter
>>>     "v4l-ctl -i /dev/video-%(id)s -someparam 45",
>>> ]
Note all string parameters are %(keyword)s expanded from the target’s tags (as reported by tcf list -vv TARGETNAME), such as:
- output_file_name: name of the file where to dump the capture output; file shall be overwritten.
- id: target’s name
- type: target’s type
- … (more with tcf list -vv TARGETNAME)
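The %(keyword)s expansion is plain Python %-formatting against the target's tag dictionary; a quick sketch (the tag values below are made up for illustration) of how the command line above would expand:

```python
# Hypothetical tag dictionary, as a target might report via `tcf list -vv`
tags = dict(
    id = "target1",
    type = "qemu-x86",
    vnc_port = 5900,
    output_file_name = "/var/lib/ttbd/capture-target1.png",
)

# the capturer's name and command line are %-expanded the same way
name = "%(id)s VNC @localhost:%(vnc_port)s" % tags
cmdline = "gvnccapture -q localhost:%(vnc_port)s %(output_file_name)s"
expanded = cmdline % tags
# expanded == "gvnccapture -q localhost:5900 /var/lib/ttbd/capture-target1.png"
```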
Parameters: extension (str) – (optional) string to append to the file name, such as an extension. This is needed because some capture programs insist on guessing the file type from the file name and balk if there is no proper extension; e.g.:
>>> extension = ".png"
Avoid adding the extension to the command name you are asking to execute, as the system needs to know the full file name.
System configuration
It is highly recommended to configure udev to generate device nodes named after the target’s name; this makes configuration simpler and isolates the system from changes in the device enumeration order.
For example, adding to /etc/udev/rules.d/90-ttbd.rules:
SUBSYSTEM == "video4linux", ACTION == "add", KERNEL=="video*", ENV{ID_SERIAL} == "SERIALNUMBER", SYMLINK += "video-TARGETNAME"
where SERIALNUMBER is the serial number of the device that captures the screen for TARGETNAME. Note it is recommended to call the video interface video-SOMETHING so that tools such as ffmpeg won’t be confused.
-
start
(target, capturer)¶ If this is a streaming capturer, start capturing the stream
Usually starts a program that is active, capturing to a file until the
stop_and_get()
method is called.
Parameters: - target (ttbl.test_target) – target on which we are capturing
- capturer (str) – name of this capturer
Returns: dictionary of values to pass to the client, usually nothing
-
stop_and_get
(target, capturer)¶ If this is a streaming capturer, stop streaming and return the captured data or take a snapshot and return it.
This stops the capture of the stream and returns the file, or takes a snapshot capture and returns it.
Parameters: - target (ttbl.test_target) – target on which we are capturing
- capturer (str) – name of this capturer
Returns: dictionary of values to pass to the client, including the data; to stream a large file, include a member in this dictionary called stream_file pointing to the file’s path; eg:
>>> return dict(stream_file = CAPTURE_FILE)
-
class
ttbl.capture.
generic_stream
(name, cmdline, mimetype, pre_commands=None, wait_to_kill=1)¶ This is a generic stream capturer that can invoke any program to capture the stream for a while.
For example, in a server configuration file, define a capturer that will record video with ffmpeg from a camera that is pointing to the target’s monitor or an HDMI capturer:
>>> capture_vstream_ffmpeg_v4l = ttbl.capture.generic_stream(
>>>     "%(id)s screen",
>>>     "ffmpeg -i /dev/video-%(id)s-0"
>>>     " -f avi -qscale:v 10 -y %(output_file_name)s",
>>>     mimetype = "video/avi",
>>>     wait_to_kill = 0.25,
>>>     pre_commands = [
>>>         "v4l2-ctl -d /dev/video-%(id)s-0 -c focus_auto=0"
>>>     ]
>>> )
Then attach the capture interface to the target with:
>>> ttbl.config.targets['TARGETNAME'].interface_add(
>>>     "capture",
>>>     ttbl.capture.interface(
>>>         hdmi0_vstream = capture_vstream_ffmpeg_v4l,
>>>         ...
>>>     )
>>> )
Now, when the client runs this to start the capture:
$ tcf capture-start TARGETNAME hdmi0_vstream
the server will execute the pre-commands:
$ v4l2-ctl -d /dev/video-TARGETNAME-0 -c focus_auto=0
and then start recording with:
$ ffmpeg -i /dev/video-TARGETNAME-0 -f avi -qscale:v 10 -y SOMEFILE
so that when we decide it is done, in the client:
$ tcf capture-get TARGETNAME hdmi0_vstream file.avi
it will stop recording and download the video recording to file.avi.
Parameters: - name (str) – name for error messages from this capturer
- cmdline (str) – commandline to invoke the capturing of the stream
- mimetype (str) – MIME type of the capture output, eg video/avi
- pre_commands (list) – (optional) list of commands (str) to execute before the command line, to for example, set volumes.
- wait_to_kill (int) – (optional) time to wait between sending a SIGTERM and a SIGKILL to the capturing process, so it has time to close the capture file. Defaults to one second.
Note all string parameters are %(keyword)s expanded from the target’s tags (as reported by tcf list -vv TARGETNAME), such as:
- output_file_name: name of the file where to dump the capture output; file shall be overwritten.
- id: target’s name
- type: target’s type
- … (more with tcf list -vv TARGETNAME)
For more information, look at
ttbl.capture.generic_snapshot
.-
start
(target, capturer)¶ If this is a streaming capturer, start capturing the stream
Usually starts a program that is active, capturing to a file until the
stop_and_get()
method is called.
Parameters: - target (ttbl.test_target) – target on which we are capturing
- capturer (str) – name of this capturer
Returns: dictionary of values to pass to the client, usually nothing
-
stop_and_get
(target, capturer)¶ If this is a streaming capturer, stop streaming and return the captured data or take a snapshot and return it.
This stops the capture of the stream and returns the file, or takes a snapshot capture and returns it.
Parameters: - target (ttbl.test_target) – target on which we are capturing
- capturer (str) – name of this capturer
Returns: dictionary of values to pass to the client, including the data; to stream a large file, include a member in this dictionary called stream_file pointing to the file’s path; eg:
>>> return dict(stream_file = CAPTURE_FILE)
8.8.5.3. Interface for common debug operations¶
Each of a target’s components can expose debugging interfaces; these allow some low-level control of CPUs, access to JTAG functionality, etc., in an abstract way based on capability.
Debugging can be started or stopped; depending on the driver, this might require extra steps to enable that debugging support. Some components might need a power cycle.
Each component can have its own driver, or one driver might service multiple components (e.g.: in asymmetric SMP systems serviced by OpenOCD).
When the debug support for multiple components is implemented by the same driver, a single call will be made into it with the list of components it applies to.
The client side is implemented by target.debug
.
-
class
ttbl.debug.
impl_c
¶ Driver interface for a component’s debugging capabilities
The debug interface supports multiple components which will be called for the top level debug operations; they are implemented by an instance subclassed from this interface.
Objects that implement the debug interface then can be passed to the debug interface as implementations; for example, the QEMU driver exposes a debug interface and thus:
>>> qemu_pc = ttbl.qemu.pc(...)
>>> ...
>>> target.interface_add("debug", ttbl.debug.interface(
>>>     ( "x86", qemu_pc )))
this assumes that the QEMU driver object qemu_pc has been instantiated to implement an x86 virtual machine; thus the debug control for the virtual machine x86 is registered with the debug interface.
-
debug_list
(target, component)¶ Provide debugging information about the component
Return None if not currently debugging in this component; otherwise, a dictionary keyed by string with information.
If the dictionary is empty, it is assumed that the debugging is enabled but the target is off, so most services can’t be accessed.
Known fields:
- GDB: string describing the location of the GDB bridge associated to this component in the format PROTOCOL:ADDRESS:PORT (eg: tcp:some.host.name:4564); it shall be possible to feed this directly to the gdb target remote command.
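A minimal sketch of a driver implementing these calls. A real driver subclasses ttbl.debug.impl_c; here a stand-in base class and a hypothetical gdb_bridge_c driver are defined so the example is self-contained:

```python
# Stand-in for ttbl.debug.impl_c so this sketch runs standalone;
# a real driver would subclass ttbl.debug.impl_c instead.
class impl_c:
    pass

class gdb_bridge_c(impl_c):
    """Hypothetical driver exposing a GDB stub over TCP."""
    def __init__(self, host, port):
        self.host = host
        self.port = port
        self.debugging = False

    def debug_start(self, target, components):
        # a real driver might here reconfigure hardware; some need a
        # power cycle before the change is effective
        self.debugging = True

    def debug_stop(self, target, components):
        self.debugging = False

    def debug_list(self, target, component):
        if not self.debugging:
            return None          # not debugging this component
        # PROTOCOL:ADDRESS:PORT, feedable to gdb's "target remote"
        return dict(GDB = "tcp:%s:%d" % (self.host, self.port))

driver = gdb_bridge_c("localhost", 4564)
driver.debug_start(None, ["x86"])
info = driver.debug_list(None, "x86")   # {'GDB': 'tcp:localhost:4564'}
```

The GDB field follows the format described above, so a client can pass it straight to gdb's target remote command.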
-
debug_start
(target, components)¶ Put the components in debugging mode.
Note it might need a power cycle for the change to be effective, depending on the component.
Parameters: - target (ttbl.test_target) – target on which to operate
- components (list(str)) – list of components on which to operate
-
debug_stop
(target, components)¶ Take the components out of debugging mode.
Note it might need a power cycle for the change to be effective, depending on the component.
Parameters: - target (ttbl.test_target) – target on which to operate
- components (list(str)) – list of components on which to operate
-
debug_halt
(target, components)¶ Halt the components’ CPUs
Note it might need a power cycle for the change to be effective, depending on the component.
-
debug_resume
(target, components)¶ Resume the components’ CPUs
Note it might need a power cycle for the change to be effective, depending on the component.
-
debug_reset
(target, components)¶ Reset the components’ CPUs
Note it might need a power cycle for the change to be effective, depending on the component.
-
debug_reset_halt
(target, components)¶ Reset and halt the components’ CPUs
Note it might need a power cycle for the change to be effective, depending on the component.
-
-
class
ttbl.debug.
interface
(*impls, **kwimpls)¶ Generic debug interface to start and stop debugging on a target.
When debug is started before the target is powered up, then upon power up, the debugger stub shall wait for a debugger to connect before continuing execution.
When debug is started while the target is executing, the target shall not be stopped and the debugging stub shall permit a debugger to connect and interrupt the target upon connection.
Each target provides its own debug methodology; issue a debug-gdb command to find out where to connect.
When a target has this capability, the interface can be added to the target specifying which actual object derived from
impl_c
implements the functionality; e.g., for a target based on QEMU, QEMU provides a debug interface:
>>> qemu_pc = ttbl.qemu.pc(...)
>>> ...
>>> target.interface_add("debug",
>>>     ttbl.debug.interface(**{
>>>         'x86': qemu_pc
>>>     }))
See
conf_00_lib_pos.target_qemu_pos_add()
orconf_00_lib_mcu.target_qemu_zephyr_add()
for an example of this.-
get_list
(target, who, args, _files, _user_path)¶
-
put_start
(target, who, args, _files, _user_path)¶
-
put_stop
(target, who, args, _files, _user_path)¶
-
put_halt
(target, who, args, _files, _user_path)¶
-
put_resume
(target, who, args, _files, _user_path)¶
-
put_reset
(target, who, args, _files, _user_path)¶
-
put_reset_halt
(target, who, args, _files, _user_path)¶
-
8.8.5.4. Interface to flash the target using fastboot¶
-
class
ttbl.fastboot.
interface
(usb_serial_number, allowed_commands)¶ Interface to execute fastboot commands on target
An instance of this gets added as an object to the main target with something like:
>>> ttbl.config.targets['targetname'].interface_add(
>>>     "fastboot",
>>>     ttbl.fastboot.interface(
>>>         "R1J56L1006ba8b",
>>>         {
>>>             # Allow a command called `flash_pos`; the command
>>>             #
>>>             #   flash_pos partition_boot /home/ttbd/partition_boot.pos.img
>>>             #
>>>             # will be replaced with:
>>>             #
>>>             #   flash partition_boot /home/ttbd/partition_boot.pos.img
>>>             #
>>>             # anything else will be rejected
>>>             "flash_pos": [
>>>                 ( "flash_pos", "flash" ),
>>>                 "partition_boot",
>>>                 "/home/ttbd/partition_boot.pos.img"
>>>             ],
>>>             # Allow a command called `flash`; the command
>>>             #
>>>             #   flash partition_boot FILENAME
>>>             #
>>>             # will be replaced with:
>>>             #
>>>             #   flash partition_boot /var/lib/ttbd-INSTANCE/USERNAME/FILENAME
>>>             #
>>>             # anything else will be rejected
>>>             "flash": [
>>>                 "flash",
>>>                 "partition_boot",
>>>                 ( re.compile("^(.+)$"), "%USERPATH%/\g<1>" )
>>>             ],
>>>         })
>>> )
This allows controlling which commands can be executed in the server using fastboot, allowing access to the server’s user storage area (to which files can be uploaded using the tcf broker-upload command or
target.store.upload
).The server configuration will decide which commands can be executed or not (a quick list can be obtained with tcf fastboot-list TARGETNAME).
Parameters: - usb_serial_number (str) – serial number of the USB device
under which the target exposes the fastboot interface. E.g.:
"R1J56L1006ba8b"
. - allowed_commands (dict) – Commands that can be executed with
fastboot. See
interface.allowed_commands
.
-
allowed_commands
= None¶ Commands that can be executed with fastboot.
This is a KEY/VALUE list. Each KEY is a command name (which doesn’t necessarily need to map to a fastboot command itself). The VALUE is a list of arguments to fastboot.
The user must send the same number of arguments as in the VALUE list.
Each entry in the VALUE list is either a string or a regular expression; whatever the user sends must match it, or the command is rejected.
The entry can be a tuple ( STR|REGEX, REPLACEMENT ) that allows replacing what the user sends (using
re.sub()
). In the example above:
>>> ( re.compile("^(.+)$"), "%USERPATH%/\\g<1>" )
it is meant to take a filename uploaded to the server’s user storage area. A match is done on the totality of the argument (i.e., the file name) and then
\\g<1>
in the substitution string is replaced by that match (group #1), to yield %USERPATH%/FILENAME.
Furthermore, the following substitutions are done on the final strings before passing the arguments to fastboot:
%USERPATH%
will get replaced by the current user path
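The matching and substitution mechanism can be sketched in plain Python; this is an illustration of the mechanism under the rules stated above (the map_args helper is hypothetical, not the server's actual implementation):

```python
import re

def map_args(value_list, user_args, user_path):
    """Validate user arguments against an allowed_commands VALUE list,
    applying tuple ( PATTERN, REPLACEMENT ) substitutions."""
    if len(user_args) != len(value_list):
        raise ValueError("wrong number of arguments")
    final = []
    for entry, arg in zip(value_list, user_args):
        if isinstance(entry, tuple):
            pattern, replacement = entry
            if isinstance(pattern, str):
                pattern = re.compile(re.escape(pattern))
            if not pattern.match(arg):
                raise ValueError("argument %r rejected" % arg)
            arg = pattern.sub(replacement, arg)   # e.g. map to %USERPATH%/...
        elif arg != entry:
            raise ValueError("argument %r rejected" % arg)
        # final substitution before handing the arguments to fastboot
        final.append(arg.replace("%USERPATH%", user_path))
    return final

# The "flash" entry from the example above
allowed = [ "flash", "partition_boot",
            ( re.compile("^(.+)$"), "%USERPATH%/\\g<1>" ) ]
args = map_args(allowed, [ "flash", "partition_boot", "image.img" ],
                "/var/lib/ttbd/username")
# args == ["flash", "partition_boot", "/var/lib/ttbd/username/image.img"]
```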
Warning
There is a potential to exploit the system’s security if wide access is given to touch files or execute commands without filtering what the user sends. Be very restrictive about what commands and arguments are whitelisted.
-
path
= '/usr/bin/fastboot'¶ path to the fastboot binary
can be changed globally:
>>> ttbl.fastboot.interface.path = "/some/other/fastboot"
or for a specific instance
>>> ttbl.config.targets['TARGETNAME'].fastboot.path = "/some/other/fastboot"
-
put_run
(target, who, args, _files, user_path)¶ Run a fastboot command
Note we don’t allow any command execution, only what is allowed by
allowed_commands
, which might also filter the arguments based on the configuration.
-
get_list
(_target, _who, _args, _files, _user_path)¶
8.8.5.5. Flash binaries/images into the target¶
Interfaces and drivers to flash blobs/binaries/anything into targets; most commonly these are firmwares, BIOSes, configuration settings that are sent via some JTAG or firmware upgrade interface.
Interface implemented by ttbl.images.interface
,
drivers implemented subclassing ttbl.images.impl_c
.
-
class
ttbl.images.
impl_c
¶ Driver interface for flashing with
interface
-
flash
(target, images)¶ Flash images onto target
Parameters: - target (ttbl.test_target) – target where to flash
- images (dict) – dictionary keyed by image type of the files (in the servers’s filesystem) that have to be flashed.
The implementation assumes, per configuration, that this driver knows how to flash the images of the given type (hence why it was configured) and shall abort if given an unknown type.
If multiple images are given, they shall be (when possible) flashed all at the same time.
-
-
class
ttbl.images.
interface
(*impls, **kwimpls)¶ Interface to flash a list of images (OS, BIOS, Firmware…) that can be uploaded to the target server and flashed onto a target.
Any image type can be supported; it is up to the configuration to set the image types and the driver that can flash them. E.g.:
>>> target.interface_add(
>>>     "images",
>>>     ttbl.images.interface({
>>>         "kernel-x86": ttbl.openocd.pc(),
>>>         "kernel-arc": "kernel-x86",
>>>         "rom": ttbl.images.dfu_c(),
>>>         "bootloader": ttbl.images.dfu_c(),
>>>     })
>>> )
Aliases can be specified that refer to another type; in that case it is implied that images that are aliases will all be flashed in a single call. Thus in the example above, trying to flash an image of each type would yield three calls:
- a single ttbl.openocd.pc.flash() call would be done for images kernel-x86 and kernel-arc, so they would be flashed at the same time.
- a single ttbl.images.dfu_c.flash() call for rom
- a single ttbl.images.dfu_c.flash() call for bootloader
If rom were an alias for bootloader, there would be a single call to ttbl.images.dfu_c.flash().
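The alias flattening described above can be sketched as follows; this is an illustration of the grouping logic only (group_by_driver and fake_driver are hypothetical names), not the interface's actual implementation:

```python
def group_by_driver(impls, images):
    """Resolve aliases and group image files by the driver that flashes them."""
    by_driver = {}
    for img_type, file_name in images.items():
        resolved = img_type
        while isinstance(impls[resolved], str):   # follow alias chain
            resolved = impls[resolved]
        driver = impls[resolved]
        by_driver.setdefault(id(driver), (driver, {}))[1][img_type] = file_name
    # one flash() call per driver, with all its images at once
    return [ (driver, files) for driver, files in by_driver.values() ]

class fake_driver:          # stands in for e.g. ttbl.openocd.pc / dfu_c
    pass

openocd, dfu_rom, dfu_boot = fake_driver(), fake_driver(), fake_driver()
impls = {
    "kernel-x86": openocd,
    "kernel-arc": "kernel-x86",   # alias: flashed together with kernel-x86
    "rom": dfu_rom,
    "bootloader": dfu_boot,
}
calls = group_by_driver(impls, {
    "kernel-x86": "x86.elf", "kernel-arc": "arc.elf",
    "rom": "rom.bin", "bootloader": "boot.bin",
})
# three calls: one with both kernels, one for rom, one for bootloader
```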
The imaging procedure might take control over the target, possibly powering it on and off (if power control is available). Thus, after flashing no assumptions shall be made and the safest one is to call (in the client)
target.power.cycle
to ensure the right state.-
put_flash
(target, who, args, _files, user_path)¶
-
get_list
(_target, _who, _args, _files, _user_path)¶
-
class
ttbl.images.
bossac_c
(serial_port=None, console=None)¶ Flash with the bossac tool
>>> target.interface_add(
>>>     "images",
>>>     ttbl.images.interface(**{
>>>         "kernel-arm": ttbl.images.bossac_c(),
>>>         "kernel": "kernel-arm",
>>>     })
>>> )
Parameters: - serial_port (str) – (optional) File name of the device node representing the serial port this device is connected to. Defaults to /dev/tty-TARGETNAME.
- console (str) – (optional) name of the target’s console tied to the serial port; this is needed to disable it so this can flash. Defaults to serial0.
Requirements
Needs a connection to the USB programming port, represented as a serial port (TTY)
bossac has to be available in the path variable
path
.(for Arduino Due) uses the bossac utility built on the arduino branch from https://github.com/shumatech/BOSSA/tree/arduino:
$ git clone https://github.com/shumatech/BOSSA.git bossac.git
$ cd bossac.git
$ make -k
$ sudo install -o root -g root bin/bossac /usr/local/bin
TTY devices need to be properly configured permission-wise for bossac to work; for that, choose a Unix group which can get access to said devices and add udev rules such as:
# Arduino2 boards: allow reading USB descriptors
SUBSYSTEM=="usb", ATTR{idVendor}=="2a03", ATTR{idProduct}=="003d", GROUP="GROUPNAME", MODE="660"
# Arduino2 boards: allow reading serial port
SUBSYSTEM=="tty", ENV{ID_SERIAL_SHORT}=="SERIALNUMBER", GROUP="GROUPNAME", MODE="0660", SYMLINK+="tty-TARGETNAME"
For Arduino Due and others, the theory of operation is quite simple. According to https://www.arduino.cc/en/Guide/ArduinoDue#toc4, the Due will erase the flash if you open the programming port at 1200bps, and then start a reset process and launch the flash when you open the port at 115200. This is not so clear in the URL above, but this is what experimentation found.
So for flashing, we’ll take over the console, set the serial port to 1200bps, wait a wee bit and then call bossac.
-
path
= '/usr/bin/bossac'¶ Path to bossac
Change with
>>> ttbl.images.bossac_c.path = "/usr/local/bin/bossac"
or for a single instance that then will be added to config:
>>> imager = ttbl.images.bossac_c(SERIAL)
>>> imager.path = "/usr/local/bin/bossac"
-
flash
(target, images)¶ Flash images onto target
Parameters: - target (ttbl.test_target) – target where to flash
- images (dict) – dictionary keyed by image type of the files (in the servers’s filesystem) that have to be flashed.
The implementation assumes, per configuration, that this driver knows how to flash the images of the given type (hence why it was configured) and shall abort if given an unknown type.
If multiple images are given, they shall be (when possible) flashed all at the same time.
-
class
ttbl.images.
dfu_c
(usb_serial_number)¶ Flash the target with DFU util
>>> target.interface_add(
>>>     "images",
>>>     ttbl.images.interface(**{
>>>         "kernel-x86": ttbl.images.dfu_c(),
>>>         "kernel-arc": "kernel-x86",
>>>         "kernel": "kernel-x86",
>>>     })
>>> )
Parameters: usb_serial_number (str) – target’s USB Serial Number
Requirements
- Needs a connection to the USB port that exposes a DFU interface upon boot
- Uses the dfu-utils utility, available for most (if not all) Linux distributions
- Permissions to use USB devices in /dev/bus/usb are needed; ttbd usually runs with group root, which shall be enough.
- In most cases, needs power control for proper operation, but some MCU boards will reset on their own afterwards.
Note the tags to the target must include, on each supported BSP, a tag named dfu_interface_name listing the name of the altsetting of the DFU interface to which the image for said BSP needs to be flashed.
This can be found, when the device exposes the DFU interfaces, with the lsusb -v command; for example, for a tinyTILE (output summarized for clarity):
$ lsusb -v
...
Bus 002 Device 110: ID 8087:0aba Intel Corp.
Device Descriptor:
  bLength                18
  bDescriptorType         1
  ...
  Interface Descriptor:
    bInterfaceClass       254 Application Specific Interface
    bInterfaceSubClass      1 Device Firmware Update
    ...
    iInterface              4 x86_rom
  Interface Descriptor:
    ...
    iInterface              5 x86_boot
  Interface Descriptor:
    ...
    iInterface              6 x86_app
  Interface Descriptor:
    ...
    iInterface              7 config
  Interface Descriptor:
    ...
    iInterface              8 panic
  Interface Descriptor:
    ...
    iInterface              9 events
  Interface Descriptor:
    ...
    iInterface             10 logs
  Interface Descriptor:
    ...
    iInterface             11 sensor_core
  Interface Descriptor:
    ...
    iInterface             12 ble_core
In this case, the three cores available are x86 (x86_app), arc (sensor_core) and ARM (ble_core).
Example
A Tiny Tile can be connected, without exposing a serial console:
>>> target = ttbl.test_target("ti-01")
>>> target.interface_add(
>>>     "power",
>>>     ttbl.power.interface({
>>>         ( "USB present",
>>>           ttbl.pc.delay_til_usb_device("5614010001031629") ),
>>>     })
>>> )
>>> target.interface_add(
>>>     "images",
>>>     ttbl.images.interface(**{
>>>         "kernel-x86": ttbl.images.dfu_c("5614010001031629"),
>>>         "kernel-arm": "kernel-x86",
>>>         "kernel-arc": "kernel-x86",
>>>         "kernel": "kernel-x86"
>>>     })
>>> )
>>> ttbl.config.target_add(
>>>     target,
>>>     tags = {
>>>         'bsp_models': { 'x86+arc': ['x86', 'arc'], 'x86': None, 'arc': None },
>>>         'bsps' : {
>>>             "x86": dict(zephyr_board = "tinytile",
>>>                         zephyr_kernelname = 'zephyr.bin',
>>>                         dfu_interface_name = "x86_app",
>>>                         console = ""),
>>>             "arm": dict(zephyr_board = "arduino_101_ble",
>>>                         zephyr_kernelname = 'zephyr.bin',
>>>                         dfu_interface_name = "ble_core",
>>>                         console = ""),
>>>             "arc": dict(zephyr_board = "arduino_101_sss",
>>>                         zephyr_kernelname = 'zephyr.bin',
>>>                         dfu_interface_name = 'sensor_core',
>>>                         console = "")
>>>         },
>>>     },
>>>     target_type = "tinytile"
>>> )
-
path
= '/usr/bin/dfu-tool'¶ Path to the dfu-tool
Change with
>>> ttbl.images.dfu_c.path = "/usr/local/bin/dfu-tool"
or for a single instance that then will be added to config:
>>> imager = ttbl.images.dfu_c(SERIAL)
>>> imager.path = "/usr/local/bin/dfu-tool"
-
flash
(target, images)¶ Flash images onto target
Parameters: - target (ttbl.test_target) – target where to flash
- images (dict) – dictionary keyed by image type of the files (in the servers’s filesystem) that have to be flashed.
The implementation assumes, per configuration, that this driver knows how to flash the images of the given type (hence why it was configured) and shall abort if given an unknown type.
If multiple images are given, they shall be (when possible) flashed all at the same time.
-
class
ttbl.images.
esptool_c
(serial_port=None, console=None)¶ Flash a target using Espressif’s esptool.py
>>> target.interface_add(
>>>     "images",
>>>     ttbl.images.interface(**{
>>>         "kernel-xtensa": ttbl.images.esptool_c(),
>>>         "kernel": "kernel-xtensa"
>>>     })
>>> )
Parameters: - serial_port (str) – (optional) File name of the device node representing the serial port this device is connected to. Defaults to /dev/tty-TARGETNAME.
- console (str) – (optional) name of the target’s console tied to the serial port; this is needed to disable it so this can flash. Defaults to serial0.
Requirements
The ESP-IDF framework, of which
esptool.py
is used to flash the target; to install:$ cd /opt $ git clone --recursive https://github.com/espressif/esp-idf.git
(note the
--recursive
! It is needed so all the submodules are picked up.)
Configure the path to it globally by setting
path
in a /etc/ttbd-production/conf_*.py file:
import ttbl.images
ttbl.images.esptool_c.path = "/opt/esp-idf/components/esptool_py/esptool/esptool.py"
Permissions to use USB devices in /dev/bus/usb are needed; ttbd usually runs with group root, which shall be enough.
Needs power control for proper operation; FIXME: pending to make it operate without power control, using
esptool.py
.
The base code will convert the ELF image to the required bin image using the
esptool.py
script. Then it will flash it via the serial port.-
path
= '__unconfigured__ttbl.images.esptool_c.path__'¶ Path to esptool.py
Change with
>>> ttbl.images.esptool_c.path = "/usr/local/bin/esptool.py"
or for a single instance that then will be added to config:
>>> imager = ttbl.images.esptool_c(SERIAL)
>>> imager.path = "/usr/local/bin/esptool.py"
-
flash
(target, images)¶ Flash images onto target
Parameters: - target (ttbl.test_target) – target where to flash
- images (dict) – dictionary keyed by image type of the files (in the servers’s filesystem) that have to be flashed.
The implementation assumes, per configuration, that this driver knows how to flash the images of the given type (hence why it was configured) and shall abort if given an unknown type.
If multiple images are given, they shall be (when possible) flashed all at the same time.
8.8.5.6. Interface to flash the target using ioc_flash_server_app¶
-
class
ttbl.ioc_flash_server_app.
interface
(tty_path)¶ Remote tool interface
An instance of this gets added as an object to the main target with:
>>> ttbl.config.targets['TARGETNAME'].interface_add(
>>>     "ioc_flash_server_app",
>>>     ttbl.ioc_flash_server_app.interface("/dev/tty-TARGETNAME-FW")
>>> )
Where
/dev/tty-TARGETNAME-FW
is the serial line for the IOC firmware interface for TARGETNAME.
Note this requires the Intel Platform Flash Tool installed in your system; this driver will expect the binary available in a location described by
path
.
Parameters: tty_path (str) – path to the target’s IOC firmware serial port -
path
= '/opt/intel/platformflashtool/bin/ioc_flash_server_app'¶ path to the binary
can be changed globally:
>>> ttbl.ioc_flash_server_app.interface.path = "/some/other/ioc_flash_server_app"
or for a specific instance
>>> ttbl.config.targets['TARGETNAME'].ioc_flash_server_app.path = "/some/other/ioc_flash_server_app"
-
allowed_modes
= ('fabA', 'fabB', 'fabC', 'grfabab', 'grfabc', 'grfabd', 'grfabe', 'hadfaba', 'kslfaba', 'generic', 'w', 't')¶ allowed operation modes
these translate directly to the command line option
-MODE
- fabA
- fabB
- fabC
- grfabab
- grfabc
- grfabd
- grfabe
- hadfaba
- kslfaba
- generic (requires the generic_id parameter too)
- w
- t
-
put_run
(target, who, args, _files, user_path)¶
-
8.8.5.7. Connect targets to other targets¶
This module defines the interface to make targets connect to each other and control that process.
For example, you can have a target that implements a USB disk being connected to another via USB.
The interface to the target is the ttbl.things.interface
,
which delegates to the different thing drivers
(ttbl.things.impl_c
) the implementation of the methodology to
plug or unplug the targets.
-
class
ttbl.things.
impl_c
(name=None, **kwargs)¶ Define how to plug a thing (which is a target) into a target
Each of these drivers implements the details that allow the thing to be plugged into or unplugged from the target. For example:
- this might be controlling a relay that connects/disconnects the USB lines in a cable so that it emulates a human connecting/disconnecting
- this might be controlling a mechanical device which plugs/unplugs a cable
-
plug
(target, thing)¶ Plug thing into target
Caller owns both target and thing
Parameters: - target (ttbl.test_target) – target where to plug
- thing (ttbl.test_target) – thing to plug into target
-
unplug
(target, thing)¶ Unplug thing from target
Caller owns target (not thing necessarily)
Parameters: - target (ttbl.test_target) – target where to unplug from
- thing (ttbl.test_target) – thing to unplug
-
get
(target, thing)¶ Parameters: - target (ttbl.test_target) – target where to unplug from
- thing (ttbl.test_target) – thing to unplug
Returns: True if thing is connected to target, False otherwise.
-
class
ttbl.things.
interface
(*impls, **kwimpls)¶ Define how to plug things (targets) into other targets
A thing is a target that can be, in any form, connected to another target. For example, a USB device to a host, where both the USB device and the host are targets. This is so that we can make sure they are owned by someone before plugging, as plugging can alter state.
For the specification of impls and kwimpls, see
ttbl.tt_interface.impls_set()
, taking into account all implementations need to be objects derived fromttbl.things.impl_c
.-
get_list
(target, who, _args, _files, _user_path)¶
-
get_get
(target, who, args, _files, _user_path)¶ Plug thing into target
The user who is plugging must own this target and the thing.
-
put_plug
(target, who, args, _files, _user_path)¶ Plug thing into target
The user who is plugging must own this target and the thing.
-
put_unplug
(target, who, args, _files, _user_path)¶ Unplug thing from target
The user who is unplugging must own this target, but does not necessarily need to own the thing.
Note that when you release the target, all the things connected to it are released, even if you don’t own the things.
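The plug/unplug contract above can be illustrated with a hypothetical relay-based driver; this is a standalone sketch only, not real ttbl code, and the relay controller class is a made-up stand-in:

```python
# Hypothetical sketch of a "thing" driver shaped like the
# ttbl.things.impl_c plug/unplug/get contract; mock_relay is a
# stand-in for a real relay controller, not part of ttbl.
class mock_relay:
    """Stand-in for a relay that can cut or connect USB lines."""
    def __init__(self):
        self.closed = False

    def close_circuit(self):
        self.closed = True

    def open_circuit(self):
        self.closed = False


class relay_thing_driver:
    """Plug/unplug a thing by toggling a relay on the USB lines."""
    def __init__(self, relay):
        self.relay = relay

    def plug(self, target, thing):
        # the caller owns both target and thing at this point
        self.relay.close_circuit()

    def unplug(self, target, thing):
        # the caller owns the target (not necessarily the thing)
        self.relay.open_circuit()

    def get(self, target, thing):
        # True if the thing is currently connected to the target
        return self.relay.closed
```

A real driver would subclass ttbl.things.impl_c and operate actual hardware, but the three methods and their ownership rules are the whole contract.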
-
8.8.6. Common helper library¶
Common timo infrastructure and code: command line and logging helpers
FIXMEs
- This is still leaking temporary files (subpython’s stdout and stderr) when running top level tests.
-
commonl.
config_import_file
(filename, namespace='__main__', raise_on_fail=True)¶ Import a Python [configuration] file.
Any symbol available to the current namespace is available to the configuration file.
Parameters: - filename – path and file name to load.
- namespace – namespace where to insert the configuration file
- raise_on_fail (bool) – (optional) raise an exception if the importing of the config file fails.
>>> timo.config_import_file("some/path/file.py", "__main__")
-
commonl.
path_expand
(path_list)¶
-
commonl.
config_import
(path_list, file_regex, namespace='__main__', raise_on_fail=True)¶ Import Python [configuration] files that match file_regex in any of the list of given paths into the given namespace.
Any symbol available to the current namespace is available to the configuration file.
Parameters: - paths – list of paths where to import from; each item can be a list of colon separated paths and thus the list would be further expanded. If an element is the empty list, it removes the current list.
- file_regex – a compiled regular expression to match the file name against.
- namespace – namespace where to insert the configuration file
- raise_on_fail (bool) – (optional) raise an exception if the importing of the config file fails.
>>> timo.config_import([ ".config:/etc/config" ],
>>>                    re.compile("conf[_-].*.py"), "__main__")
-
commonl.
logging_verbosity_inc
(level)¶
-
commonl.
logfile_open
(tag, cls=None, delete=True, bufsize=0, suffix='.log', who=None, directory=None)¶
-
commonl.
log_format_compose
(log_format, log_pid, log_time=False)¶
-
commonl.
cmdline_log_options
(parser)¶ Initializes a parser with the standard command line options to control verbosity when using the logging module
:param python:argparse.ArgParser parser: command line argument parser
-v|–verbose to increase verbosity (defaults to print/log errors only)
Note that after processing the command line options, you need to initialize logging with:
>>> import logging, argparse, timo.core
>>> arg_parser = argparse.ArgumentParser()
>>> timo.core.cmdline_log_options(arg_parser)
>>> args = arg_parser.parse_args()
>>> logging.basicConfig(format = args.log_format, level = args.level)
-
commonl.
mkid
(something, l=10)¶ Generate a 10 character base32 ID out of an iterable object
Parameters: something – anything from which an ID has to be generated (anything iterable)
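A functionally similar sketch of such an ID generator (the hash algorithm chosen here is an assumption, not necessarily what commonl.mkid uses):

```python
import base64
import hashlib

def mkid(something, l=10):
    # Sketch: hash the string form of the input and keep the first
    # l characters of its base32 encoding, lowercased; the same
    # input always yields the same ID.
    h = hashlib.sha512(str(something).encode("utf-8")).digest()
    return base64.b32encode(h).decode("ascii").lower()[:l]
```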
-
commonl.
trim_trailing
(s, trailer)¶ Trim trailer from the end of s (if present) and return it.
Parameters:
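The behavior can be sketched in a few lines (a plain re-implementation for illustration, not the commonl source):

```python
def trim_trailing(s, trailer):
    # Remove trailer from the end of s only if it is actually there;
    # otherwise return s unchanged.
    if trailer and s.endswith(trailer):
        return s[:-len(trailer)]
    return s
```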
-
commonl.
name_make_safe
(name, safe_chars=None)¶ Given a filename, return the same filename with all characters not in the set [-_.0-9a-zA-Z] replaced with _.
Parameters:
-
commonl.
file_name_make_safe
(file_name, extra_chars=':/')¶ Given a filename, return the same filename with all characters not in the set [-_.0-9a-zA-Z] removed.
This is useful to roughly turn a URL into a file name, but it is not bidirectional (as it is destructive) and not very foolproof.
-
commonl.
hash_file
(hash_object, filepath, blk_size=8192)¶ Run the contents of a file through a hash generator.
Parameters:
-
commonl.
request_response_maybe_raise
(response)¶
-
commonl.
os_path_split_full
(path)¶ Split an absolute path in all the directory components
-
commonl.
progress
(msg)¶ Print some sort of progress information banner to standard error output that will be overridden with real information.
This only works when stdout or stderr are not redirected to files and is intended to give humans a feel of what’s going on.
-
commonl.
digits_in_base
(number, base)¶ Convert a number to a list of the digits it would have if written in base @base.
- For example:
- (16, 10) -> [1, 6] as 1*10 + 6 = 16
- (44, 4) -> [2, 3, 0] as 2*4*4 + 3*4 + 0 = 44
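The standard repeated-division algorithm produces exactly these lists (an illustrative sketch, not the commonl source):

```python
def digits_in_base(number, base):
    # Repeatedly divide by the base, collecting remainders; the
    # remainders read back-to-front are the digits of the number
    # written in that base.
    if number == 0:
        return [0]
    digits = []
    while number:
        digits.append(number % base)
        number //= base
    return list(reversed(digits))
```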
-
commonl.
rm_f
(filename)¶ Remove a file (not a directory) unconditionally, ignore errors if it does not exist.
-
commonl.
makedirs_p
(dirname, mode=None)¶ Create a directory tree, ignoring an error if it already exists
Parameters:
-
commonl.
symlink_f
(source, dest)¶ Create a symlink, ignoring an error if it already exists
-
commonl.
process_alive
(pidfile, path=None)¶ Return whether a process path/PID combination is alive, from the standpoint of the calling context (in terms of UID permissions, etc).
Parameters: Returns: PID number if alive, None otherwise (might be running as a separate user, etc)
-
commonl.
process_terminate
(pid, pidfile=None, tag=None, path=None, wait_to_kill=0.25)¶ Terminate a process (TERM and KILL after 0.25s)
Parameters:
-
commonl.
process_started
(pidfile, path, tag=None, log=None, verification_f=None, verification_f_args=None, timeout=5, poll_period=0.3)¶
-
commonl.
origin_get
(depth=1)¶ Return the name of the file and line from which this was called
-
commonl.
origin_fn_get
(depth=1, sep=':')¶ Return the name of the function and line from which this was called
-
commonl.
kws_update_type_string
(kws, rt, kws_origin=None, origin=None, prefix='')¶ Given two dictionaries, update the first using only those keys of the second that have string values
Parameters:
-
commonl.
kws_update_from_rt
(kws, rt, kws_origin=None, origin=None, prefix='')¶ Given a target’s tags, update the keywords valid for exporting and evaluation
This means filtering out things that are not strings and maybe others, decided in a case by case basis.
We make sure we fix the type and ‘target’ as the fullid.
-
commonl.
if_present
(ifname)¶ Return whether network interface ifname is present in the system
Parameters: ifname (str) – name of the network interface to check Returns: True if interface exists, False otherwise
-
commonl.
if_index
(ifname)¶ Return the interface index for ifname if it is present in the system
Parameters: ifname (str) – name of the network interface Returns: index of the interface, or None if not present
-
commonl.
if_find_by_mac
(mac, physical=True)¶ Return the name of the physical network interface whose MAC address matches mac.
Note the comparison is made at the string level, case insensitive.
Parameters: Returns: Name of the interface if it exists, None otherwise
-
commonl.
if_remove
(ifname)¶ Remove from the system a network interface using ip link del.
Parameters: ifname (str) – name of the network interface to remove Returns: nothing
-
commonl.
if_remove_maybe
(ifname)¶ Remove from the system a network interface (if it exists) using ip link del.
Parameters: ifname (str) – name of the network interface to remove Returns: nothing
-
commonl.
ps_children_list
(pid)¶ List all the PIDs that are children of a given process
Parameters: pid (int) – PID whose children we are looking for Returns: set of PIDs children of PID (if any)
-
commonl.
ps_zombies_list
(pids)¶ Given a list of PIDs, return which are zombies
Parameters: pids – iterable list of numeric PIDs Returns: set of PIDs which are zombies
-
commonl.
version_get
(module, name)¶
-
commonl.
tcp_port_busy
(port)¶
-
commonl.
tcp_port_assigner
(ports=1, port_range=(1025, 65530))¶
-
commonl.
tcp_port_connectable
(hostname, port)¶ Return true if we can connect to a TCP port
-
commonl.
conditional_eval
(tag, kw, conditional, origin, kind='conditional')¶ Evaluate an action’s conditional string to determine if it should be considered or not.
Returns bool: True if the action must be considered, False otherwise.
-
commonl.
check_dir
(path, what)¶
-
commonl.
check_dir_writeable
(path, what)¶
-
commonl.
prctl_cap_get_effective
()¶ Return an integer describing the effective capabilities of this process
-
commonl.
which
(cmd, mode=1, path=None)¶ Given a command, mode, and a PATH string, return the path which conforms to the given mode on the PATH, or None if there is no such file.
mode defaults to os.F_OK | os.X_OK. path defaults to the result of os.environ.get(“PATH”), or can be overridden with a custom search path.
-
commonl.
ttbd_locate_helper
(filename, log=<module 'logging' from '/usr/lib64/python2.7/logging/__init__.pyc'>, relsrcpath='')¶ Find the path to a TTBD file, depending on whether we are running from source or installed system wide.
Parameters:
-
commonl.
raise_from
(what, cause)¶ Forward compatibility shim for Python 3’s raise X from Y
-
class
commonl.
dict_missing_c
(d, missing=None)¶ A dictionary that returns as a value a string KEY_UNDEFINED_SYMBOL if KEY is not in the dictionary.
This is useful for things like
>>> "%(idonthavethis)s" % dict_missing_c({"ihavethis": True})
to print “idonthavethis_UNDEFINED_SYMBOL” instead of raising KeyError
-
commonl.
ipv4_len_to_netmask_ascii
(length)¶
-
commonl.
password_get
(domain, user, password)¶ Get the password for a domain and user
This returns a password obtained from a configuration file, maybe accessing secure password storage services to get the real password. It is intended to be used as a service to translate passwords specified in config files, which sometimes might be cleartext and at other times obtained from services.
>>> real_password = password_get("somearea", "rtmorris", "KEYRING")
will query the keyring service for the password to use for user rtmorris on domain somearea.
>>> real_password = password_get("somearea", "rtmorris", "KEYRING:Area51")
would do the same, but keyring’s domain would be Area51 instead.
>>> real_password = password_get(None, "rtmorris", >>> "FILE:/etc/config/some.key")
would obtain the password from the contents of file /etc/config/some.key.
>>> real_password = password_get("somearea", "rtmorris", "sikrit")
would just return sikrit as a password.
Parameters: - domain (str) – a domain to which this password operation applies; see below password (can be None)
- user (str) – the username for maybe obtaining a password from a password service; see below password.
- password (str) – a password obtained from the user or a configuration setting; can be None. If the password is:
  - KEYRING: ask the account’s keyring for the password for domain domain, username user
  - KEYRING:DOMAIN: ask the account’s keyring for the password for domain DOMAIN, username user, ignoring the domain parameter
  - FILE:PATH: read the password from filename PATH
Returns: the actual password to use
Password management procedures (FIXME):
to set a password in the keyring:
$ echo KEYRINGPASSWORD | gnome-keyring-daemon --unlock
$ keyring set "USER" DOMAIN
Password for 'DOMAIN' in 'USER': <ENTER PASSWORD HERE>
to be able to run, the daemon has to be executed under a dbus session:
$ dbus-session -- sh
$ echo KEYRINGPASSWORD | gnome-keyring-daemon --unlock
$ ttbd...etc
-
commonl.
split_user_pwd_hostname
(s)¶ Return a tuple decomposing
[USER[:PASSWORD]@]HOSTNAME
Returns: tuple ( USER, PASSWORD, HOSTNAME ), None in missing fields. See
password_get()
for details on how the password is handled.
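The decomposition can be sketched with plain string splitting (an illustrative re-implementation, not the commonl source):

```python
def split_user_pwd_hostname(s):
    # Decompose [USER[:PASSWORD]@]HOSTNAME; fields that are not
    # present come back as None. rsplit on "@" so passwords
    # containing "@" still leave the hostname intact.
    user = password = None
    hostname = s
    if "@" in s:
        creds, hostname = s.rsplit("@", 1)
        if ":" in creds:
            user, password = creds.split(":", 1)
        else:
            user = creds
    return user, password, hostname
```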
-
commonl.
url_remove_user_pwd
(url)¶ Given a URL, remove the username and password if any:
print(url_remove_user_pwd("https://user:password@host:port/path"))
https://host:port/path
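An equivalent sketch using the standard library's URL parsing (not the commonl source; it rebuilds the netloc from hostname and port only):

```python
from urllib.parse import urlsplit, urlunsplit

def url_remove_user_pwd(url):
    # Drop any user:password@ credentials by rebuilding the netloc
    # from the hostname and (numeric) port alone.
    parts = urlsplit(url)
    netloc = parts.hostname or ""
    if parts.port:
        netloc += ":%d" % parts.port
    return urlunsplit((parts.scheme, netloc, parts.path,
                       parts.query, parts.fragment))
```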
-
commonl.
field_needed
(field, projections)¶ Check if the name field matches any of the patterns (ala
fnmatch
).Parameters: Returns bool: True if field matches a pattern in patterns or if patterns is empty or None. False otherwise.
-
commonl.
dict_to_flat
(d, projections=None)¶ Convert a nested dictionary to a sorted list of tuples ( KEY, VALUE )
The KEY is like KEY[.SUBKEY[.SUBSUBKEY[….]]], where SUBKEY are keys in nested dictionaries.
Parameters: Returns list: sorted list of tuples KEY, VAL
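The flattening can be sketched as a depth-first walk (an illustrative version that leaves out the projections filtering of the real API):

```python
def dict_to_flat(d, prefix=""):
    # Walk nested dictionaries depth-first in sorted key order,
    # joining key components with "." to build the flat KEY names.
    items = []
    for k in sorted(d):
        key = prefix + "." + str(k) if prefix else str(k)
        if isinstance(d[k], dict):
            items.extend(dict_to_flat(d[k], key))
        else:
            items.append((key, d[k]))
    return items
```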
-
commonl.
flat_slist_to_dict
(fl)¶ Given a sorted list of flat keys and values, convert them to a nested dictionary
Parameters: list((str,object)) – list of tuples of key and any value alphabetically sorted by tuple; same sorting rules as in flat_keys_to_dict()
.Return dict: nested dictionary as described by the flat space of keys and values
-
commonl.
flat_keys_to_dict
(d)¶ Given a dictionary of flat keys, convert it to a nested dictionary
Similar to
flat_slist_to_dict()
, differing in the keys/values being in a dictionary.A key/value:
>>> d["a.b.c"] = 34
means:
>>> d['a']['b']['c'] = 34
Keys in the input dictionary are processed in alphabetical order (thus, key a.a is processed before a.b.c); later keys override earlier keys:
>>> d['a.a'] = 'aa' >>> d['a.a.a'] = 'aaa' >>> d['a.a.b'] = 'aab'
will result in:
>>> d['a']['a'] = { 'a': 'aaa', 'b': 'aab' }
The
>>> d['a.a'] = 'aa'
gets overridden by the other settings
Parameters: d (dict) – dictionary of keys/values Returns dict: (nested) dictionary
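The nesting behavior described above, including later keys overriding earlier scalars, can be sketched like this (an illustrative re-implementation, not the commonl source):

```python
def flat_keys_to_dict(d):
    # Process flat keys alphabetically; when a later key such as
    # a.a.a needs to descend through a.a, any scalar stored there
    # earlier is replaced with a sub-dictionary.
    tree = {}
    for key in sorted(d):
        node = tree
        parts = key.split(".")
        for part in parts[:-1]:
            if not isinstance(node.get(part), dict):
                node[part] = {}
            node = node[part]
        node[parts[-1]] = d[key]
    return tree
```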
-
class
commonl.
tls_prefix_c
(tls, prefix)¶
-
commonl.
data_dump_recursive
(d, prefix=u'', separator=u'.', of=<open file '<stdout>', mode 'w'>, depth_limit=10)¶ Dump a general data tree to stdout in a recursive way
For example:
>>> data = [ dict(keya = 1, keyb = 2), [ "one", "two", "three" ], "hello", sys.stdout ]
produces the stdout:
[0].keya: 1
[0].keyb: 2
[1][0]: one
[1][1]: two
[1][2]: three
[2]: hello
[3]: <open file '<stdout>', mode 'w' at 0x7f13ba2861e0>
- in a list/set/tuple, each item is printed prefixing [INDEX]
- in a dictionary, each item is prefixed with its key
- strings and cardinals are printed as such
- others are printed as what their representation as a string produces
- if an attachment is a generator, it is iterated to gather the data.
- if an attachment is of :class:generator_factory_c, the method for creating the generator is called and then the generator iterated to gather the data.
See also
data_dump_recursive_tls()
Parameters: - d – data to print
- prefix (str) – prefix to start with (defaults to nothing)
- separator (str) – used to separate dictionary keys from the prefix (defaults to “.”)
- of (FILE) – output stream where to print (defaults to sys.stdout)
- depth_limit (int) – maximum nesting levels to go deep in the data structure (defaults to 10)
-
commonl.
data_dump_recursive_tls
(d, tls, separator=u'.', of=<open file '<stdout>', mode 'w'>, depth_limit=10)¶ Dump a general data tree to stdout in a recursive way
This function works as
data_dump_recursive()
(see for more information on the usage and arguments). However, it uses TLS for storing the prefix as it digs deep into the data structure.A variable called prefix_c is created in the TLS structure on which the current prefix is stored; this is meant to be used in conjunction with stream writes such as
io_tls_prefix_lines_c
.Parameters are as documented in
data_dump_recursive()
, except for:Parameters: tls (thread._local) – thread local storage to use (as returned by threading.local()
-
class
commonl.
io_tls_prefix_lines_c
(tls, *args, **kwargs)¶ Write lines to a stream with a prefix obtained from a thread local storage variable.
This is a limited hack to transform a string written as:
line1 line2 line3
into:
PREFIXline1 PREFIXline2 PREFIXline3
without any intervention by the caller other than setting the prefix in thread local storage and writing to the stream; this allows other clients to write to the stream without needing to know about the prefixing.
Note the lines yielded are unicode-escaped or UTF-8 escaped, for being able to see in reports any special character.
Usage:
import io
import commonl
import threading

tls = threading.local()
f = io.open("/dev/stdout", "w")
with commonl.tls_prefix_c(tls, "PREFIX"), \
     commonl.io_tls_prefix_lines_c(tls, f.detach()) as of:
    of.write(u"line1\nline2\nline3\n")
Limitations:
- this is a hack; it only works reliably if full lines are being printed
-
flush
()¶ Flush any leftover data in the temporary buffer, write it to the stream, prefixing each line with the prefix obtained from self.tls’s prefix_c attribute.
-
write
(s)¶ Write string to the stream, prefixing each line with the prefix obtained from self.tls’s prefix_c attribute.
-
writelines
(itr)¶ Write the iterator to the stream, prefixing each line with the prefix obtained from self.tls’s prefix_c attribute.
-
commonl.
mkutf8
(s)¶
-
class
commonl.
generator_factory_c
(fn, *args, **kwargs)¶ Create generator objects multiple times
Given a generator function and its arguments, create it when
make_generator()
is called.
>>> factory = generator_factory_c(generator, arg1, arg2..., arg = value...)
>>> ...
>>> generator = factory.make_generator()
>>> for data in generator:
>>>     do_something(data)
>>> ...
>>> another_generator = factory.make_generator()
>>> for data in another_generator:
>>>     do_something(data)
Generators, once created, cannot be reset to the beginning, so this factory can be used to simulate that behavior by creating a fresh generator on each call.
Parameters: - fn – generator function
- args – arguments to the generator function
- kwargs – keyword arguments to the generator function
-
make_generator
()¶ Create and return a generator
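The factory amounts to remembering the call so it can be replayed; a minimal sketch (not the commonl source):

```python
class generator_factory_c:
    # Remember the generator function and its arguments so that a
    # fresh generator can be created on every make_generator() call.
    def __init__(self, fn, *args, **kwargs):
        self.fn = fn
        self.args = args
        self.kwargs = kwargs

    def make_generator(self):
        # Each call re-invokes the generator function from scratch.
        return self.fn(*self.args, **self.kwargs)
```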
-
commonl.
file_iterator
(filename, chunk_size=4096)¶ Iterate over a file’s contents
Commonly used along with generator_factory_c and the TCF client API to report attachments:
Parameters: chunk_size (int) – (optional) read blocks of this size
>>> import commonl
>>>
>>> class _test(tcfl.tc.tc_c):
>>>
>>>     def eval(self):
>>>         generator_f = commonl.generator_factory_c(commonl.file_iterator, FILENAME)
>>>         testcase.report_pass("some message", dict(content = generator_f))
This module implements a simple expression language.
The grammar for this language is as follows:
- expression ::= expression “and” expression
  | expression “or” expression
  | “not” expression
  | “(” expression “)”
  | symbol “==” constant
  | symbol “!=” constant
  | symbol “<” number
  | symbol “>” number
  | symbol “>=” number
  | symbol “<=” number
  | symbol “in” list
  | symbol
- list ::= “[” list_contents “]”
- list_contents ::= constant | list_contents “,” constant
- constant ::= number | string
When symbols are encountered, they are looked up in an environment dictionary supplied to the parse() function.
For the case where
expression ::= symbol
it evaluates to true if the symbol is defined to a non-empty string.
For all comparison operators, if the config symbol is undefined, it will be treated as a 0 (for > < >= <=) or an empty string “” (for == != in). For numerical comparisons it doesn’t matter if the environment stores the value as an integer or string, it will be cast appropriately.
Operator precedence, starting from lowest to highest:
- or (left associative)
- and (left associative)
- not (right associative)
- all comparison operators (non-associative)
The ‘:’ operator compiles the string argument as a regular expression, and then returns a true value only if the symbol’s value in the environment matches. For example, if CONFIG_SOC=”quark_se” then
filter = CONFIG_SOC : “quark.*”
Would match it.
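The semantics of the ‘:’ operator can be illustrated with a small standalone helper (colon_op is a hypothetical name for this sketch, not part of commonl.expr_parser):

```python
import re

def colon_op(env, symbol, pattern):
    # Mirror the ':' operator: true only if the symbol's value in
    # the environment matches the regular expression; an undefined
    # symbol is treated as the empty string.
    value = env.get(symbol, "")
    return re.match(pattern, value) is not None
```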
-
commonl.expr_parser.
t_HEX
(t)¶ 0x[0-9a-fA-F]+
-
commonl.expr_parser.
t_INTEGER
(t)¶ \d+
-
commonl.expr_parser.
t_error
(t)¶
-
commonl.expr_parser.
p_expr_or
(p)¶ expr : expr OR expr
-
commonl.expr_parser.
p_expr_and
(p)¶ expr : expr AND expr
-
commonl.expr_parser.
p_expr_not
(p)¶ expr : NOT expr
-
commonl.expr_parser.
p_expr_parens
(p)¶ expr : OPAREN expr CPAREN
-
commonl.expr_parser.
p_expr_eval
(p)¶ expr : SYMBOL EQUALS const | SYMBOL NOTEQUALS const | SYMBOL GT number | SYMBOL LT number | SYMBOL GTEQ number | SYMBOL LTEQ number | SYMBOL IN list | SYMBOL IN SYMBOL | SYMBOL COLON STR
-
commonl.expr_parser.
p_expr_single
(p)¶ expr : SYMBOL
-
commonl.expr_parser.
p_list
(p)¶ list : OBRACKET list_intr CBRACKET
-
commonl.expr_parser.
p_list_intr_single
(p)¶ list_intr : const
-
commonl.expr_parser.
p_list_intr_mult
(p)¶ list_intr : list_intr COMMA const
-
commonl.expr_parser.
p_const
(p)¶ const : STR | number
-
commonl.expr_parser.
p_number
(p)¶ number : INTEGER | HEX
-
commonl.expr_parser.
p_error
(p)¶
-
commonl.expr_parser.
ast_sym
(ast, env)¶
-
commonl.expr_parser.
ast_sym_int
(ast, env)¶
-
commonl.expr_parser.
ast_expr
(ast, env)¶
-
commonl.expr_parser.
parse
(expr_text, env)¶ Given a text representation of an expression in our language, use the provided environment to determine whether the expression is true or false