The Scripting Module integrates both Octave and Python engines, allowing users to write and run calculations using either language within Valispace. The module is designed to perform complex calculations that are not possible through the standard ValiEngine.
The module is currently in beta and can be enabled in the Beta Features setting, as shown in Fig.1.
If users wish to manipulate objects other than numerical Valis, they must use the Valispace Python API. Example use cases include:
Creating a value and bulk-adding it to multiple components
Making bulk edits to requirement identifiers
Running simulations using Python
Converting units of power values to kW
Running a custom workflow behavior based on automated triggers
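As a minimal sketch of the unit-conversion use case above: in a real script the power values would be fetched and written back through the Valispace Python API, but the conversion logic itself can be shown standalone (the unit table and function name are illustrative, not part of any Valispace API).

```python
# Hypothetical sketch of the "convert power values to kW" use case.
# In a real script the values would come from the Valispace Python API.

UNIT_FACTORS_TO_KW = {"W": 1e-3, "kW": 1.0, "MW": 1e3}

def to_kilowatts(value, unit):
    """Convert a power value expressed in W, kW or MW to kW."""
    try:
        return value * UNIT_FACTORS_TO_KW[unit]
    except KeyError:
        raise ValueError(f"Unsupported power unit: {unit!r}")

print(to_kilowatts(1500, "W"))  # 1.5
```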
Scripting Module Flow
To use the module, the user creates a new script, adds inputs/outputs, writes code in .m or .py, and runs the code to get the desired output.
Creating a New Script
To create a new script, click the "+ Script" option in the modules column. A dialog box will appear to enter the script name and select the engine (Octave or Python).
Users can also create additional Text, JSON, and YAML files for use in their scripts.
To use any of the extra created files, users must include the following two lines of code at the top of the main.py file:
import site
site.addsitedir('script_code/')
After these two lines, any extra files can be imported using the standard import statement.
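Putting the two required lines in context, a minimal main.py might look like the sketch below (the helper file name is illustrative; `site.addsitedir` appends the folder's absolute path to Python's module search path):

```python
import os
import site
import sys

# The two required lines from the documentation: register the folder that
# holds the extra script files so they become importable.
site.addsitedir('script_code/')

# After this, an extra file such as script_code/helpers.py (an illustrative
# name) could be imported with a standard statement:
#   import helpers
print(os.path.abspath('script_code') in sys.path)  # True
```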
To avoid exposing credentials when scripts connect to either the Valispace API or the APIs of other tools, users can define personal secret variables. These are resolved at script runtime rather than stored in script files, so they are never exposed to other users on the deployment.
How and where to add them
Secrets can be defined in the Settings panel under User Secrets.
As the name implies, these secrets are unique to each user and only accessible by those who defined them.
How to use Secrets in a script
By importing the secret name from the “.settings” module, users can then use secrets for authentication within scripts.
In Fig.6 we can see the example of the USERNAME and PASSWORD secrets we defined in the previous subsection.
Since, at runtime, scripts fetch the values of these variables from the user settings of the user who triggered the script, they are subject to that user's permissions. Scripts should not print or otherwise output these variables, so as not to expose user secrets.
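A minimal sketch of using secrets for authentication: in a Valispace script the names would be imported from the settings module as described above; placeholder values are used here so the sketch is self-contained, and the Basic-Auth header construction is an illustrative example, not a required pattern.

```python
import base64

# In a Valispace script the secrets would be imported, e.g.:
#   from settings import USERNAME, PASSWORD
# Placeholder values are used here so the sketch runs standalone.
USERNAME = "alice"
PASSWORD = "s3cret"

# Build an HTTP Basic-Auth header for a REST call without ever printing
# the raw secret values.
token = base64.b64encode(f"{USERNAME}:{PASSWORD}".encode()).decode()
headers = {"Authorization": f"Basic {token}"}

# Safe to log: the header name, never the credential itself.
print("Authorization header prepared")
```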
General-purpose scripts triggered by automations can be set up with authentication by an Admin-level user, who can remove read permissions from all other users and thus keep the script hidden.
Thanks to the queue system, users can be certain that their scripts will always run, even when pre-defined automations regularly trigger custom workflow scripts.
All script runs are now saved on the deployment and can be consulted per script, or for all scripts by selecting the "All scripting" option at the top of the Script Module tree. Since these tables also use the AG Grid framework, users can save custom views of them and export any current table directly to a .csv or Excel file.
You can stop a running script by clicking the “Stop” button in the Actions dropdown menu.
As shown in Fig.9, the status of a script's run determines which actions are available. From top to bottom: a running instance can be stopped (and then rerun); a successfully executed run allows no further run actions; and a run that ended in error can be rerun.
Another update to the overall integration of the Scripting Module is the ability to manage script permissions via the main Valispace front end; previously this was only possible through the Admin panel.
Users can set permissions in the module itself or through the Project Module’s Permissions tab.
Run scripts from a dashboard
Users can also create custom interaction dashboards with the use of “Run Script” buttons. These are similar to the previously available Request buttons used to trigger REST calls but can be configured to run either single or multiple scripts at the push of a button.
By using the Python API in the called scripts, custom interaction dashboards can be set up in which elements such as standard text boxes serve as input and output fields for a script, which can then, directly or indirectly, affect other elements on display.
A simple example is a custom counter (Fig.11) which updates two standard text boxes with the number of passed and failed test runs.
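The counting logic behind such a counter can be sketched as below. In a real script the test-run records would be fetched with the Valispace Python API and the two strings written back to the dashboard text boxes; the record shape (a `status` field) and function name are assumptions for illustration.

```python
# Hypothetical sketch of the Fig.11 counter: count passed and failed test
# runs and produce the two strings destined for the dashboard text boxes.

def count_runs(test_runs):
    passed = sum(1 for run in test_runs if run.get("status") == "passed")
    failed = sum(1 for run in test_runs if run.get("status") == "failed")
    return f"Passed: {passed}", f"Failed: {failed}"

runs = [{"status": "passed"}, {"status": "failed"}, {"status": "passed"}]
print(count_runs(runs))  # ('Passed: 2', 'Failed: 1')
```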
Scripts & Automations
Preset automations can trigger scripts. For example, a complex calculation can be set to run automatically if its defined inputs are altered. Additionally, by using the Valispace Python API, bespoke complex behavior can also be programmed to build custom workflows.
Not only will automations trigger scripts, but they will also pass on information about which objects triggered them, so that scripts can act directly on those objects. This information is made available in the "kwargs" dictionary variable, under the 'triggered_objects' key, as in the following example:
object_data = kwargs['triggered_objects']
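Expanding that one-liner into a runnable sketch of an automation-triggered script body: the kwargs dictionary is supplied by the automation at runtime and is simulated here; the exact fields inside each object record (such as `id`) are an assumption for illustration.

```python
# Sketch of a script body that acts on the objects which triggered it.

def handle_trigger(**kwargs):
    object_data = kwargs['triggered_objects']
    # Act on each object that triggered the automation; here we simply
    # collect their ids.
    return [obj.get('id') for obj in object_data]

# Simulated automation call:
ids = handle_trigger(triggered_objects=[{'id': 101}, {'id': 205}])
print(ids)  # [101, 205]
```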
Automations will also run scripts when triggered by users who do not have permission to view those scripts. This allows admin users to set up workflow customization and calculations while limiting access to the underlying code, such as proprietary mathematical and physical models.
Example workflow scripts can be found in the valifn public repository’s templates folder.
Custom valifn images
On-prem users have the added ability to customize their valifn instance to run any Python package they want, provided their server hardware can handle it. They can also manage which valifn image is used at script runtime.
As shown in Fig. 12, this option is available as a text field in the General Settings for each individual script.
How to create a custom image
Further instructions on how to set up your own valifn images can be found in the public repository’s documentation pages:
Python Script Examples
Usable script examples can be found both in a deployment's Valicopter 5000 example project (included in new deployments) and in ValiFn's public GitHub repository.
These examples can be triggered manually or by automations (exclusively in some cases) and can be either used as is or modified by users to perform any sort of bespoke behaviour.
Many of these examples were created to demonstrate how the Scripting Module and automations can be combined into bespoke automated workflows. Users are encouraged to customize any existing script to their needs and to contribute general-purpose scripts and improvements to ValiFn's public repository.
Children Suspect Warning (Automation)
This script is triggered by an automation that must be set to fire on changes to an existing requirement, since the script acts on the properties of the edited requirement. Its currently defined action is to place a discussion in each of its children (if it has any) with a bespoke message about the change status of the parent requirement.
Other possible actions could be to create a task or add the edited requirement to a review.
Checks whether requirement that triggered pre-set automation has children
If it has children it posts a discussion in each child requirement indicating that the parent has been updated.
The discussion includes the identity of the user who edited the requirement and thus triggered the automation.
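The steps above can be sketched as follows. Fetching the requirement and posting the discussions would use the Valispace Python API; the field names (`children`, `identifier`) and the function name are assumptions for illustration only.

```python
# Minimal sketch of the "children suspect" logic: build one discussion
# message per child of the edited requirement.

def build_child_warnings(requirement, editor):
    children = requirement.get('children') or []
    message = (f"Parent requirement {requirement['identifier']} was edited "
               f"by {editor}; please review this child for impact.")
    # One discussion message per child requirement id.
    return {child_id: message for child_id in children}

req = {'identifier': 'REQ-001', 'children': [11, 12]}
warnings = build_child_warnings(req, 'alice')
print(len(warnings))  # 2
```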
New Task (Automation)
When creating new components, a prospect required that tasks be automatically created and assigned to a specific user. In this example, a preset automation triggered by the creation of a new component creates a new task and assigns a user to it. The original plan was to add the triggering object as an input, but that development was not finalized.
Uses kwargs['triggered_objects'] to extract the information about the object that triggered the automation and add it to the task's input field.
Posts a new task assigned to a specified user.
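A sketch of the payload such a script might build is shown below. The payload field names (`name`, `assignees`, `description`) are assumptions for illustration; the real field names come from the Valispace API endpoint used to post tasks.

```python
# Hypothetical sketch of the "new task on component creation" body: turn
# the triggering component into a task payload for a specified assignee.

def make_task_payload(kwargs, assignee_id):
    component = kwargs['triggered_objects'][0]  # the component just created
    return {
        "name": f"Review new component {component.get('name', '?')}",
        "assignees": [assignee_id],
        "description": f"Auto-created for component id {component.get('id')}",
    }

payload = make_task_payload(
    {'triggered_objects': [{'id': 7, 'name': 'Antenna'}]}, assignee_id=3)
print(payload["name"])  # Review new component Antenna
```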
Requirement Statistics Counter (Manual/Automation)
A simple example of how a more customized counter can be created, with more statistical values than those supplied in the default blocks available in Dashboards and Analysis documents. It can be run manually or set to run from an automation every time a requirement is created, modified or deleted.
Take the extracted information on requirements and derive more complex statistics by comparing it with previously stored values; each value can indicate the percentage by which it increased or decreased.
Compare deployment statistics by individually running the script in each project and adding a special instance which draws statistics from every project, displaying them in a dashboard.
Set a customized warning for project administrators if the statistics show a sudden drop in number of requirements, signaling a possible drastic change to the project.
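The percentage-change comparison suggested above reduces to a small helper like the sketch below (a standalone illustration; in a real script the previous counts would come from the Valis the script patches):

```python
# Signed percentage change between a previously stored count and the
# current one, as suggested for the extended statistics.

def percent_change(previous, current):
    """Change from previous to current, in percent; None if undefined."""
    if previous == 0:
        return None  # undefined when there was no previous value
    return (current - previous) / previous * 100.0

print(percent_change(200, 250))  # 25.0
print(percent_change(200, 150))  # -25.0
```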
Builds overall requirement stats for a single project on the deployment.
Patches pre-created Valis with results.
Automated Report Generation
An example script which demonstrates the power of the Python API to create fully automated reports. Although the main script is also available on the integrations documentation page, this particular version has been adapted to run from the deployment's scripting module.
It can be triggered by an automation or manually.
Create a full report by adapting the current requirements fetching process to extract components and other objects to be filled out in the report
Add other customizable fields, which can be fetched from custom dashboard text blocks, from which the script can also be triggered.
Change the final output file to a PDF instead of an editable Word file.
Takes a Word file from a given deployment as a template and returns generated files as output, one per specification, placed in the deployment's Files Management (see Specification export based on Microsoft Word Template for more details).
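The Word-template mechanics are handled by the actual script; the sketch below only illustrates the core idea of rendering one output per specification from a shared template. The placeholder names and data shape are invented for illustration.

```python
# Standalone illustration of "one generated file per specification":
# fill template placeholders with each specification's data.

def render_report(template, specification):
    # Placeholders like {name} and {req_count} are illustrative.
    return template.format(**specification)

template = "Specification: {name}\nRequirements: {req_count}\n"
specs = [{"name": "Power", "req_count": 12},
         {"name": "Thermal", "req_count": 8}]
reports = [render_report(template, s) for s in specs]
print(len(reports))  # 2 -- one generated file per specification
```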
Test Run Statistics Counter
Dashboard counters have not yet caught up with the Testing module, and there are no automated counters for tests yet. This script was developed as a proof of concept for a custom counter that has not yet been implemented. It was initially conceived to be triggered manually from a "Run Script" button in a Dashboard.
Expand the Test statistics which are posted back to Valispace.
If a test run is successful, post the result to a connected task and change its state to “Done”.
Returns a string, placed on predefined Dashboard text blocks, with calculated test run statistics.
It can take input from an automation or it can be triggered manually.