Definitions of key terms and concepts for the Baseline Protocol
The Baseline Protocol is a set of methods that enable two or more state machines to achieve and maintain data consistency and workflow continuity by using a network as a common frame of reference.
A mechanism for one Workflow to use a proof generated by a different Workflow; for example, a proof generated in a Workflow executed by Workgroup A is used as input to a Workflow executed by Workgroup B.
An interface for connecting, integrating, and synchronizing a Baseline stack and a system of record.
Given a network or system of n components, t of which are dishonest, and assuming only point-to-point channels between all the components: whenever a component A tries to broadcast a value x, such as a block of transactions, the other components may exchange messages with each other to verify the consistency of A's broadcast and eventually settle on a common value y. The system is considered to resist Byzantine faults if, whenever a component A broadcasts a value x:
- If A is honest, then all honest components agree on the value x.
- If A is dishonest, all honest components agree on the common value y.
"The Byzantine Generals Problem", Leslie Lamport, Robert E. Shostak, Marshall Pease, ACM Transactions on Programming Languages and Systems, 1982
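The two conditions above can be illustrated with a toy sketch (this is not a real BFT protocol, and the component names are hypothetical): honest components compare the values they received from the broadcaster A and settle on the majority value.

```python
# Toy illustration of Byzantine agreement (not a real BFT protocol):
# honest components exchange the values they received from broadcaster A
# and all adopt the most common value reported among honest components.
from collections import Counter

def agree(received: dict[str, str], honest: set[str]) -> str:
    """Each honest component reports what it received from A; all
    honest components adopt the majority value among honest reports."""
    votes = Counter(v for c, v in received.items() if c in honest)
    value, _ = votes.most_common(1)[0]
    return value

# Honest A broadcasts the same x to everyone: all honest agree on x.
print(agree({"B": "x", "C": "x", "D": "x"}, honest={"B", "C", "D"}))  # x

# Dishonest A sends conflicting values: after comparing notes, the
# honest components still converge on a common value y.
print(agree({"B": "x", "C": "y", "D": "y"}, honest={"B", "C", "D"}))  # y
```

In a real system the exchange of reports is itself subject to faults, which is why actual BFT protocols require multiple message rounds and bounds on t.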
The ability of a Party to immediately cease all of its active Workflows across all of its Workgroups within a Baseline-compliant implementation and, if required, to exit a Baseline-compliant implementation with all of its data, without any third party being able to prevent the exit.
A Common Frame of Reference as used in this document refers to achieving and maintaining data consistency between two or more Systems of Record using a consensus-controlled state machine. This enables workflow and data continuity and integrity between two or more counterparties.
A Consensus Controlled State Machine (CCSM) is a network of replicated, shared, and synchronized digital data spread across multiple sites connected through a peer-to-peer network and utilizing a consensus algorithm. There is no central administrator or centralized data storage.
Information captured through electronic means, which may or may not have a paper record to back it up.
Bulletin of the American Society for Information Science and Technology, "Electronic Records Research Working Meeting: A Report from the Archives Community," May 28–30, 1997.
The condition of being the same with something described or asserted, per Merriam-Webster Dictionary.
A concretization of the above used in this document: Identity is the combination of one or more unique identifiers with data associated with this/these identifier(s). Identity-associated data consists of signed certificates or credentials such as Verifiable Credentials and other unsigned, non-verifiable data objects generated by or on behalf of the unique identifier(s).
The ability of a Party operating Workflows on a Baseline-compliant implementation A to instantiate and operate one or more Workflows with one or more Parties on a Baseline-compliant implementation B, without the Parties on either implementation A or B having to know anything about the other Party's implementation.
In concurrent computing, liveness refers to a set of properties of concurrent systems that require a system to make progress, even though its concurrently executing components ("processes") may have to "take turns" in critical sections of the program that cannot be run simultaneously by multiple processes. Liveness guarantees are important properties in operating systems and distributed systems.
Alpern B., Schneider F.B. (1985). "Defining Liveness." Information Processing Letters 21:181–185.
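A minimal sketch of one liveness property (illustrative, not taken from the cited paper): under a round-robin discipline, every process waiting for the critical section eventually enters it, so the system as a whole keeps making progress.

```python
# Liveness sketch: with round-robin turn-taking, every waiting process
# eventually enters the critical section -- no process starves.
def round_robin(waiting: list[str]) -> list[str]:
    """Grant the critical section to each waiting process in turn and
    return the order in which they entered."""
    entered = []
    queue = list(waiting)
    while queue:
        p = queue.pop(0)   # next process takes its turn
        entered.append(p)  # p runs its critical section, then releases
    return entered

print(round_robin(["P1", "P2", "P3"]))  # every process makes progress
```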
A legal contract that defines the general terms and conditions governing the entire scope of products commercially exchanged between the parties to the agreement.
Refers to a situation where a statement's author cannot successfully dispute its authorship or the validity of an associated contract. The term is often seen in a legal setting when the authenticity of a signature is being challenged. In such an instance, the authenticity is being "repudiated".
A Party participates in the execution of one or more given Workflows. A Workgroup is set up and managed by one Party, which invites other Parties to join as Workgroup members.
The ability of a Party to migrate and re-Baseline its existing Workflows and data from one Baseline-compliant implementation to another, without any third party being able to prevent the migration.
A method of ensuring the privacy of Workflow data represented on a public Mainnet.
A Proof of Correctness is a mathematical proof that a computer program, or a part thereof, will yield correct results when executed, i.e., results fulfilling specific requirements. Before proving a program correct, the theorem to be proved must be formulated. The hypothesis of such a correctness theorem is typically a condition, called a pre-condition, that the relevant program variables must satisfy immediately before the program is executed. The thesis of the correctness theorem is typically a condition, called a post-condition, that the relevant program variables must satisfy immediately after execution of the program. The thesis of a correctness theorem may be a statement that the final values of the program variables are a particular function of their initial values.
"Encyclopedia of Software Engineering" (2002), John Wiley & Sons, Inc. Print ISBN 9780471377375; Online ISBN 9780471028956; DOI 10.1002/0471028959.
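The pre-condition/post-condition structure described above can be made concrete with executable assertions (an illustrative sketch; the function is hypothetical): the pre-condition holds before the body runs, and the post-condition expresses the final values as a function of the initial ones.

```python
# Correctness-theorem sketch for integer division: if the pre-condition
# holds immediately before execution, the post-condition holds
# immediately after, relating the final values (q, r) to the inputs.
def int_divide(a: int, b: int) -> tuple[int, int]:
    assert a >= 0 and b > 0               # pre-condition
    q, r = a // b, a % b
    assert a == q * b + r and 0 <= r < b  # post-condition
    return q, r

print(int_divide(17, 5))  # (3, 2)
```

A full proof of correctness would establish the post-condition for all inputs satisfying the pre-condition, rather than checking it at runtime as the assertions do here.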
The integrity of the data in a data architecture is established by what can be called the "system of record." The system of record is the one place where the value of data is definitively established. Note that the system of record applies only to detailed granular data; it does not apply to summarized or derived data.
W.H. Inmon, Daniel Linstedt and Mary Levins, "Data Architecture", 2019, Academic Press, ISBN: 978-0-12-816916-2
A collection of entities and processes that Service Providers rely on to help preserve the security, safety, and privacy of data, and which is predicated on the use of a CCSM implementation.
Marsh S. (1994). "Formalizing Trust as a Computational Concept". Ph.D. thesis, University of Stirling, Department of Computer Science and Mathematics.
Verifiable computing, which can be described as verifiably secure computing, enables a computer to offload the computation of some function to other, perhaps untrusted, clients while maintaining verifiable, secure results. The other clients evaluate the function and return the result together with a proof that the computation of the function was carried out correctly. The proof is not absolute but depends on the validity of the security assumptions used in the proof. Consider, for example, a blockchain consensus algorithm where the proof of computation is the nonce of a block: someone inspecting the block can assume with virtual certainty that the results are correct, because the number of computational nodes that agreed on the outcome of the same computation is defined as sufficient, in the consensus algorithm's mathematical proof of security, for the consensus outcome to be secure.
Gennaro, Rosario; Gentry, Craig; Parno, Bryan (31 August 2010). Non-Interactive Verifiable Computing: Outsourcing Computation to Untrusted Workers. CRYPTO 2010. doi:10.1007/978-3-642-14623-7_25
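The offload-and-verify pattern can be sketched in a few lines (an illustrative toy, not the scheme from the cited paper): the client hands an expensive computation (factoring n) to an untrusted worker, and the returned factor list serves as evidence the client can check cheaply by multiplication.

```python
# Verifiable-computing sketch: the worker does the expensive search;
# the client verifies the result with one cheap multiplication. Note
# this toy check only confirms the product reconstructs n -- a weaker
# guarantee than the cryptographic proofs of the actual literature.
def untrusted_worker_factor(n: int) -> list[int]:
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def client_verify(n: int, factors: list[int]) -> bool:
    product = 1
    for f in factors:
        product *= f
    return product == n and all(f > 1 for f in factors)

proof = untrusted_worker_factor(8051)
print(proof, client_verify(8051, proof))  # [83, 97] True
```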
A process made up of a series of Worksteps between all or a subset of Parties in a given Workgroup.
A Workgroup is a set of Parties, also referred to as BPI Subjects, who are the authorized users of a BPI. The Parties use Workflows to synchronize their systems of record through one or more Worksteps in the Workflow.
A Workstep is characterized by an input, the deterministic application of a set of logic rules and data to that input, and the generation of a verifiably deterministic and verifiably correct output.
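The input/rules/output shape of a Workstep can be sketched as follows (the business rule and field names are hypothetical): deterministic logic is applied to the input, and a hash commitment over the canonically serialized output lets any counterparty re-run the same rules and verify the result.

```python
# Workstep sketch: deterministic rule application plus a hash
# commitment any counterparty can recompute to verify the output.
import hashlib
import json

def workstep(order: dict) -> tuple[dict, str]:
    # Deterministic logic rule (hypothetical): total = qty * unit_price.
    output = {**order, "total": order["qty"] * order["unit_price"]}
    # Canonical serialization makes the commitment reproducible.
    commitment = hashlib.sha256(
        json.dumps(output, sort_keys=True).encode()
    ).hexdigest()
    return output, commitment

out1, c1 = workstep({"qty": 3, "unit_price": 40})
out2, c2 = workstep({"qty": 3, "unit_price": 40})
print(out1["total"], c1 == c2)  # 120 True  (same input, same output)
```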