The Baseline Protocol is an open-source initiative that combines advances in cryptography, messaging, and consensus-controlled state machines -- often referred to as blockchains or distributed ledger technology (DLT) -- to deliver secure and private business processes, event ordering, data consistency, and workflow integrity at low cost. The Baseline Protocol provides a framework that allows Baseline Protocol Implementations (BPIs) to establish a common frame of reference, enabling confidential and complex (business) collaborations between enterprises without moving any sensitive data out of traditional Systems of Record. The work is governed as an EEA Community Project, managed by OASIS.
Businesses spend hundreds of millions of dollars on ERP, CRM and other internal systems of record. Failure to properly synchronize these systems between organizations causes considerable disruption and waste: disputes, lost inventory, inflated capital costs, regulatory actions, and other value leakage. To avoid these problems, systems require a common frame of reference, but only the largest high-volume partnerships can afford the capital expense involved in setting up such integrations. The baseline approach requires a common frame of reference that is always on, strongly tamper-resistant, and able to prevent any individual or group from taking over the system and locking companies out of valid operations. These requirements strongly suggest the use of a public blockchain or Layer-2 network anchored to a public blockchain.
Past approaches to blockchain technology have had difficulty meeting the highest standards of privacy, security and performance required by corporate IT departments. Overcoming these issues is the goal of the Baseline Protocol.
An illustrative example of the use of a Baseline Protocol Implementation (BPI) is a Buyer placing an order with a Seller. Normally, a Buyer system creates an Order and transmits it to the Seller system through some pre-established messaging system, without providing proof that the Order is correct or that both parties processed and stored the message consistently. This forces the Seller and Buyer systems to validate the order, often manually, which leads to a time-consuming and often expensive back and forth between Seller and Buyer to rectify inconsistencies.
A Master Services Agreement (MSA) between a Requester (Buyer) and a Provider (Seller) is implemented on a BPI and contains billing terms, pricing, discounts, and Seller information such as the billing address. Once established and agreed upon by Buyer and Seller, the BPI provides state synchronization between them: the ERP systems of Buyer and Seller can now refer to mutually agreed-upon data as a common frame of reference.
Based on this mutually agreed-upon state in the MSA, the Buyer creates an Order, employing a cryptographic proof that confirms not only the correct application of business logic but also the correct application of commercial data in the Order's creation. This proof is submitted together with the Order through the BPI and is then validated by the Seller. If the proof is valid, the Seller accepts the proposed state change by generating its own cryptographic proof, confirming its acceptance of the state change. The Seller then updates the state of the business workflow in the BPI and sends the new proof to the Buyer.
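To make the flow concrete, here is a minimal TypeScript sketch of the round trip described above. All names and signatures are invented for illustration; they are not the Baseline Protocol's actual interfaces, and the proof functions are stand-ins for the real zero-knowledge machinery.

```typescript
// Hypothetical sketch of the Order workstep described above.
// None of these names come from the Baseline Protocol packages;
// they only illustrate the message/proof round trip.

interface Order {
  msaId: string;      // reference to the agreed MSA state
  sku: string;
  quantity: number;
  unitPrice: number;  // must comply with MSA pricing terms
}

interface StateChangeProposal {
  order: Order;
  proof: string;      // proof of correct business logic and data
}

// Buyer side: create the Order and a proof that it complies with the MSA.
function proposeOrder(
  order: Order,
  createProof: (o: Order) => string
): StateChangeProposal {
  return { order, proof: createProof(order) };
}

// Seller side: verify the Buyer's proof; if valid, counter-sign with a
// proof of acceptance and update the shared workflow state in the BPI.
function acceptOrder(
  proposal: StateChangeProposal,
  verifyProof: (p: StateChangeProposal) => boolean,
  createAcceptanceProof: (p: StateChangeProposal) => string
): string | null {
  if (!verifyProof(proposal)) return null; // reject: back to the Buyer
  return createAcceptanceProof(proposal);  // new proof sent to the Buyer
}
```

The essential property is that neither side ships the underlying MSA data; the parties exchange only the Order and proofs that it complies with the shared, baselined state.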
The figure below illustrates high-level Buyer and Seller Order generation and acceptance, assuming that an MSA between Buyer and Seller already exists, is recorded on a BPI, and that the commercial state has been synchronized up to this workstep in the commercial business workflow.
Figure 1: Illustrative example of how the commercial state between Buyer and Seller is synchronized and an Order is created.
Without a BPI, both Buyer and Seller must assume that the MSA between them and all its values are correctly represented in the other party’s respective Systems-of-Record. Hence, if an order is created based upon the MSA but does not comply with the MSA, it will likely result in extensive manual interactions between Seller and Buyer at one stage or another to resolve the problem to their mutual satisfaction.
If you are unsure about any specific terms, feel free to check the Glossary.
The Baseline Protocol initiative was announced on March 4, 2020 and launched as an OASIS open source project on March 19, 2020, supported by fourteen founding companies. The number of active companies and individuals contributing to the work and using it in products and enterprise solutions grew quickly through the summer of 2020 and is currently estimated at over 600.
The initiative is strongly aligned with the Mainnet Working Group, a joint effort of the Enterprise Ethereum Alliance and the Ethereum Foundation.
This work is active and open to contributors.
All work in the Baseline Protocol public repo is released under the CC0 1.0 Universal public domain dedication. For the full license text, refer to the license.
In an openly governed open standards / open source initiative, leadership is organic. One need not be seated on a committee to lead. One need not be the chair to lead the community in a direction. (Indeed, the chair's primary job is to harmonize the interests of others and help the community move to a shared vision, not necessarily to forward their own point of view.)
The way to lead is to start something, help something, fix something...even spellcheck something! The way to lead is to get others to amplify what you are doing (best done by listening deeply to others first). The way to lead is to serve your own (and your company's) enlightened self-interest. You should be able to draw a straight line from your time on this work to real impact for your own offerings.
Below are the things you need to know to get informed, get involved and get value out of the work.
Anyone can join the Baseline Protocol communication channels (see below), and anyone with a GitHub ID can view the roadmap (be sure to log in with your GitHub ID), fork the repo, and submit pull requests to contribute to the work.
You can also become a regular Member of the initiative, which will allow you to manage Issues and push directly to most repo branches (other than Master). Members who step up to be accountable for projects can become General Assembly members. Members who take on responsibility for maintaining the integrity of the work and merging contributions to the master branch of the repo can become Core Developers. And finally, contributors can nominate and vote in members to serve on the governing Technical Steering Committee (TSC) annually.
Below are the things to know about how to get involved and work with the team.
There are regular meetings of the TSC, General Assembly, Core Developers and other groups. These are typically listed here. If the scheduled times don't work for your timezone, we can do 1:1s or make changes to the schedule.
It's critical that new contributors have a good idea about what the focal points of the community are and where one can make a real impact. It's hard to beat having a real conversation about how to get started in an intimate setting where you can ask questions and get immediate answers.
We will hold onboarding office hours once a week. Watch the calendar for details or inquire on one of the communication channels below. Sessions and other learning material will be posted on our YouTube channel and on Medium.
The TSC typically meets once a month to review progress. Members of the TSC receive invitations, and you can RSVP to join any meeting by sending a message to the TSC Chair.
The General Assembly typically meets once a month to review roadmaps and set high-level priorities. Members of the General Assembly receive invitations, and you can join any meeting by sending a message to the TSC Chair.
The Baseline Protocol initiative maintains a Slack channel that is moderated but public. Sign up here. It's an active group, and you can directly connect with folks doing the work and coordinate with each other to get the work done.
Thanks to an enterprising member of the community, we now also have a shared Matterbridge-enabled channel between Slack, Telegram, and Discord. You can post -- and read what anyone posts -- on the shared channel, regardless of which platform you are using. In Slack, use the #community-chat channel to broadcast to Slack, Discord and Telegram. In Discord, use the #general channel. In Telegram, just use the main /baselineprotocol thread.
You can sign up to the baseline protocol members list and get access to the Directory. This will show you any members who have elected to be displayed publicly. There are others who will choose to be hidden but will receive group emails.
While most communication seems to go through the Slack/Discord/Telegram channel, we do have email. When you sign up in the members directory, you will have the option to get email that's sent to the mailing list or directly from anyone in the group. You can control how that impacts your inbox here.
The Baseline Protocol initiative will soon launch a Discourse forum; a Reddit community is also likely. We also actively use commenting on Epics and Issues to conduct threaded discussions on key projects and engineering topics. To view the Zenhub board for these, sign in to GitHub with your ID (you must be a member of the Ethereum-Oasis org), or install the Zenhub plugin in your Chrome browser and sign in that way.
The Baseline Protocol initiative maintains the @baselineproto Twitter account, to which members of the TSC, General Assembly and some maintainers can post. We also use the #baselineprotocol tag.
The Baseline Protocol initiative uses Medium to post blogs. Here is the publication. Reach out on Slack to the TSC Chair, OASIS team or members of the steering committees, if you want to be a writer or editor.
The Baseline Protocol initiative maintains a YouTube Channel. If you have videos that you'd like to add to the Channel or would like to help on Baseline Protocol video assets, use the Slack #comms-and-marketing channel and raise your hand.
There is no Baseline Protocol token.
The Baseline Protocol is an approach: a set of techniques whose requirements are specified in the Baseline Protocol Standard, an OASIS open standard, and whose technology is available open source. The Baseline Protocol standard and source code are available under the CC0 1.0 Universal public domain dedication. For the full license text, refer to the license.
Being open-source, open-standard means that anyone is free to build any application or service implementing the protocol. Those applications/services may or may not have their own tokens.
a contributor: Anyone with a GitHub ID can submit pull requests and create and edit their own Issues. You don't need any special access to the repo to get involved and start contributing. For more information, see here.
a member: Contributors can become members of our GitHub organization, which allows them to get invitations to key meetings, be assigned to Issues, and vote for Technical Steering Committee members. For more information, see here.
a core developer: Members can become core developers and have a direct hand in deciding what work is merged to the Main/Master branch to become official Baseline Protocol technology. For more information, see here.
Baseline Protocol v1.0 core: If you want to build with the Baseline Protocol from scratch, you can start with the v1.0 core, which provides a set of 'vanilla' packages. You can get started here.
Reference Implementations: You can also choose to build on top of existing reference implementations. We recommend starting with BRI-1. This reference implementation of the core interfaces specified in the v1.0 release has been developed by individuals and companies, including community leaders from Provide, EY, Nethermind, ConsenSys, and others. It heavily utilizes the core Provide application stack and is compatible with Shuttle, an on-ramp for baselining. NATS and the Nethermind Ethereum client (the first client to implement the Baseline Protocol RPC) are used by default; these are opinionated choices. You can get started with BRI-1 here.
Developer Resources: To help you build with the Baseline Protocol, you can use the implementation guide and other developer resources available here.
The Baseline Protocol enables confidential and complex (business) collaborations between enterprises without moving any sensitive data between traditional systems of record.
While 'baselining' as a technique is not restricted to commercial use cases, a Baseline Protocol Implementation (BPI) as specified in the Baseline Protocol Standard requires at least two organizations to come together to synchronize their respective systems of record.
A group of companies interested in 'baselining' can either deploy their own implementation using their in-house resources or they can choose to work with third-party partners (product and service providers).
A list of Baseline-compliant providers will be made available in 2022.
Any system of record can be baselined without requiring any modification to legacy systems. A Baseline Protocol stack is required to manage all messaging and transactions between counterparties, and between counterparties and their agreed common frame of reference.
Distributed ledger technology, referred to in the standard as a Consensus Controlled State Machine (CCSM), is the foundational enabler of a Baseline Protocol Implementation (BPI). A compliant BPI requires conformance to the CCSM specification of the Baseline Protocol Standard. For more information, see here.
While much of the initial work on the standard and the code was done by companies and individuals in the Ethereum development community, any CCSM that conforms to the CCSM specification of the Baseline Protocol Standard can be used in a compliant Baseline Protocol Implementation (BPI).
The Baseline Protocol is a set of methods that enable two or more state machines to achieve and maintain data consistency and workflow continuity by using a network as a common frame of reference.
A mechanism for one Workflow to use a proof generated by a different Workflow -- for example, to use a proof generated in a Workflow executed by Workgroup A as input to a Workflow executed by Workgroup B.
An interface connecting and synchronizing a baseline stack and system of record.
Given a network or system of n components, t of which are dishonest, and assuming only point-to-point channels between all the components, then whenever a component A tries to broadcast a value x, such as a block of transactions, the other components are permitted to communicate with each other and verify the consistency of A's broadcast, eventually settling on a common value y. The system is considered to resist Byzantine faults if a component A can broadcast a value x, and then:
If A is honest, then all honest components agree on the value x.
If A is dishonest, all honest components agree on the common value y.
"The Byzantine Generals Problem", Leslie Lamport, Robert E. Shostak, Marshall Pease, ACM Transactions on Programming Languages and Systems, 1982
The ability of a Party to immediately cease all their active Workflows across all of their Workgroups within a Baseline-compliant implementation and, if required, exit a Baseline-compliant implementation with all their data without any third party being able to prevent the exit.
A Common Frame of Reference as used in this document refers to achieving and maintaining data consistency between two or more Systems of Record using a consensus-controlled state machine. This enables workflow and data continuity and integrity between two or more counterparties.
A Consensus Controlled State Machine (CCSM) is a network of replicated, shared, and synchronized digital data spread across multiple sites connected by a peer-to-peer network and utilizing a consensus algorithm. There is no central administrator or centralized data storage.
Information captured through electronic means, and which may or may not have a paper record to back it up.
Bulletin of the American Society for Information Science and Technology, Electronic Records Research Working Meeting: A Report from the Archives Community, May 28-30, 1997.
The condition of being the same with something described or asserted, per Merriam-Webster Dictionary.
A concretization of the above used in this document: Identity is the combination of one or more unique identifiers with data associated with this/these identifier(s). Identity-associated data consists of signed certificates or credentials such as verifiable credentials and other unsigned, non-verifiable data objects generated by or on behalf of the unique identifier(s).
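As a rough illustration of that definition, the structure might be typed as follows. These type and field names are assumptions made for this example, not the standard's schema:

```typescript
// Hypothetical shape of an Identity per the definition above.
interface VerifiableCredential {
  issuer: string;                       // who signed the credential
  subject: string;                      // the identifier it refers to
  claims: Record<string, unknown>;
  signature: string;                    // makes the credential verifiable
}

interface Identity {
  identifiers: string[];                // one or more unique identifiers
  credentials: VerifiableCredential[];  // signed, verifiable data
  attributes: Record<string, unknown>;  // unsigned, non-verifiable data objects
}
```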
The ability of a Party operating Workflows on a baseline-compliant implementation A to instantiate and operate one or more Workflows with one or more Parties on a baseline-compliant implementation B, without the Party on either implementation A or B having to know anything of the other Party's implementation.
In concurrent computing, liveness refers to a set of properties of concurrent systems that require a system to make progress, even though its concurrently executing components ("processes") may have to take turns in critical sections: parts of the program that cannot be simultaneously run by multiple processes. Liveness guarantees are important properties in operating systems and distributed systems.
Alpern B, Schneider FB (1985) Defining liveness. Inf Proc Lett 21:181-185
A legal contract that defines the general terms and conditions governing the entire scope of products commercially exchanged between the parties to the agreement.
Refers to a situation where a statement's author cannot successfully dispute its authorship or the validity of an associated contract. The term is often seen in a legal setting when the authenticity of a signature is being challenged. In such an instance, the authenticity is being "repudiated".
A set of Parties participating in the execution of one or more given Workflows. A Workgroup is set up and managed by one Party that invites other Parties to join as workgroup members.
The ability of a Party to migrate and re-baseline its existing Workflows and data from one baseline-compliant implementation to another without any third party being able to prevent the migration.
A way of ensuring the privacy of Workflow data represented on a public Mainnet.
A Proof of Correctness is a mathematical proof that a computer program, or a part thereof, will, when executed, yield correct results, i.e. results fulfilling specific requirements. Before proving a program correct, the theorem to be proved must, of course, be formulated. The hypothesis of such a correctness theorem is typically a condition that the relevant program variables must satisfy immediately before the program is executed. This condition is called the precondition. The thesis of the correctness theorem is typically a condition that the relevant program variables must satisfy immediately after execution of the program. This latter condition is called the postcondition. The thesis of a correctness theorem may be a statement that the final values of the program variables are a particular function of their initial values.
"Encyclopedia of Software Engineering", Print ISBN: 9780471377375| Online ISBN: 9780471028956| DOI: 10.1002/0471028959, (2002), John Wiley & Sons, Inc.
The integrity of the data in data architecture is established by what can be called the “system of record.” The system of record is the one place where the value of data is definitively established. Note that the system of record applies only to detailed granular data. The system of record does not apply to summarized or derived data.
W.H. Inmon, Daniel Linstedt and Mary Levins, "Data Architecture", 2019, Academic Press, ISBN: 978-0-12-816916-2
Collection of entities and processes that Service Providers rely on to help preserve security, safety, and privacy of data and which is predicated on the use of a CCSM implementation.
Marsh S. (1994). "Formalizing Trust as a Computational Concept". Ph.D. thesis, University of Stirling, Department of Computer Science and Mathematics.
Verifiable computing enables a computer to offload the computation of some function to other, perhaps untrusted, clients while maintaining verifiable, and thus secure, results. The other clients evaluate the function and return the result with a proof that the computation of the function was carried out correctly. The proof is not absolute but is dependent on the validity of the security assumptions used in the proof. For example, consider a blockchain consensus algorithm where the proof of computation is the nonce of a block. Someone inspecting the block can assume with virtual certainty that the results are correct, because the number of computational nodes that agreed on the outcome of the same computation is defined as sufficient for the consensus outcome to be secure in the consensus algorithm's mathematical proof of security.
Gennaro, Rosario; Gentry, Craig; Parno, Bryan (31 August 2010). Non-Interactive Verifiable Computing: Outsourcing Computation to Untrusted Workers. CRYPTO 2010. doi:10.1007/978-3-642-14623-7_25
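The replication-based flavor of this idea can be sketched in a few lines of TypeScript. This is an illustration only, not a real verifiable-computing scheme: here the "proof" is simply agreement among independent workers, a stand-in for the cryptographic proofs described above.

```typescript
// Illustrative sketch of verifiable computing by replication: the client
// offloads f(x) to several independent, possibly untrusted workers and
// accepts a result only if enough of them agree.
type Worker = (x: number) => number;

function verifiedCompute(x: number, workers: Worker[], threshold: number): number {
  const tally = new Map<number, number>();
  for (const w of workers) {
    const result = w(x);
    tally.set(result, (tally.get(result) ?? 0) + 1);
  }
  for (const [result, votes] of tally) {
    if (votes >= threshold) return result; // sufficient agreement
  }
  throw new Error("no result reached the agreement threshold");
}

// Example: three honest workers and one faulty one; threshold 3 of 4.
const square: Worker = (x) => x * x;
const faulty: Worker = (_) => 42;
console.log(verifiedCompute(7, [square, square, square, faulty], 3)); // 49
```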
A process made up of a series of Worksteps between all or a subset of Parties in a given Workgroup.
A workstep is characterized by input, the deterministic application of a set of logic rules and data to that input, and the generation of a verifiably deterministic and verifiably correct output.
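A minimal sketch of that property in TypeScript (illustrative names only; the hashing choice and rule shape are assumptions for this example, not the standard's definitions):

```typescript
import { createHash } from "crypto";

// A workstep as a deterministic function: same input + same rules => same output.
type Rule<T> = (input: T) => boolean;

function executeWorkstep<T>(input: T, rules: Rule<T>[]): { output: string; valid: boolean } {
  // Apply the set of logic rules deterministically to the input.
  const valid = rules.every((rule) => rule(input));
  // A commitment to the input that any counterparty can recompute and verify.
  const output = createHash("sha256").update(JSON.stringify(input)).digest("hex");
  return { output, valid };
}

// Example: an order must not exceed the MSA's agreed maximum quantity.
const withinMsaLimit: Rule<{ quantity: number }> = (o) => o.quantity <= 1000;
console.log(executeWorkstep({ quantity: 250 }, [withinMsaLimit]));
```

Because the output is a pure function of the input and the rules, any counterparty can re-execute the workstep and verify that it obtains the same output.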
The Baseline Protocol provides a framework that allows Baseline Protocol Implementations (BPIs) to establish a common (business) frame of reference enabling confidential and complex (business) collaborations between enterprises without moving any sensitive data between traditional Systems of Record.
Presented below is a reference architecture that, when implemented, ensures that two or more systems of record can synchronize their system state over a permissionless, public Consensus Controlled State Machine (CCSM) network.
A Baseline Protocol Stack Reference Architecture, as depicted above in Figure 1, comprises the following layers (a hypothetical interface sketch follows the list):
BPI Abstraction Layer: This layer enables accessing all externally available BPI functions through APIs, as defined in the Baseline Protocol API Standards document.
Middleware Layer: This layer manages all counterparties to an agreement and its associated workflows and worksteps with business rules and business data as well as all counterparty delegates. In addition, it manages all messaging between counterparties to an agreement and instantiation of processing layers based on newly created or updated agreements and their workflows, worksteps, business rules, and business data.
Processing Layer: Manages, properly sequences, and deterministically processes and finalizes in a privacy-preserving, cryptographically verifiable manner all state change requests from counterparties to all agreements represented in the BPI.
CCSM Abstraction Layer: This layer enables accessing all required BPI functions implemented on one or more CCSMs through APIs as defined in the Baseline Protocol API Standards document.
CCSM Layer: This layer manages, properly sequences, and deterministically processes in a privacy-preserving, cryptographically verifiable manner all transactions from the Processing Layer as well as either deterministically or probabilistically finalizes on the CCSM all CCSM state transitions based on said transactions.
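As a rough mental model only, the layering can be pictured as a set of interfaces, one per layer. These TypeScript interfaces are invented for illustration and are not the protocol's actual APIs:

```typescript
// Illustrative layering sketch; not the Baseline Protocol API.
interface BpiAbstractionLayer {
  handleApiCall(call: { method: string; params: unknown }): Promise<unknown>;
}

interface MiddlewareLayer {
  manageWorkgroup(agreementId: string): void;   // counterparties, delegates
  routeMessage(to: string, payload: unknown): void;
}

interface ProcessingLayer {
  submitStateChange(tx: { agreementId: string; proof: string }): Promise<void>;
}

interface CcsmAbstractionLayer {
  anchorProof(proof: string): Promise<string>;  // returns a CCSM tx reference
}

interface CcsmLayer {
  finalizeTransaction(txRef: string): Promise<boolean>;
}
```

Each layer only talks to the one below it: API calls flow from the BPI abstraction layer through middleware and processing, and only anchoring transactions reach the CCSM.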
BPI Abstraction Layer
API Gateway: An API gateway that exposes all required functionality to the counterparties to an agreement, enforces all necessary authentication and authorization of API calls, and properly directs the API calls within the Baseline Protocol Stack.
Application: The application logic which manages the pre-processing and routing of all API requests, as well as the enforcement of authentication and authorization protocols and rules.
Middleware Layer
Workflows: A Business Process Management engine that allows for the definition, management, and instantiation of workflows and worksteps and associated business rules and data based on (commercial) agreements between counterparties
Identity/Accounts/Workgroups: A capability that allows for the identification and management of counterparties and their delegates as well as members of workflows and worksteps organized in workgroups that are derived from the counterparties to an agreement.
Messaging: A messaging capability that allows the exchange of secure and privacy-preserving messages between counterparties to an agreement to communicate and coordinate an agreement on proposed (commercial) state changes.
Processing Layer
Transaction Pool: One or more transaction pools that hold, properly sequence, pre-process, and batch all requested state-change transactions of a BPI for processing by the Virtual State Machine.
Virtual State Machine: One or more Virtual State Machines that deterministically process and finalize, in a privacy-preserving, cryptographically verifiable manner, all state-change request transactions.
Storage: A storage system for the cryptographically linked current and historical state of all (commercial) agreements in a BPI.
CCSM Abstraction Layer
API Gateway: An API gateway that enables accessing all required BPI functions implemented on one or more CCSMs, and properly directs the requests within the CCSM Abstraction layer to the proper CCSM API application logic
Application: The CCSM API application logic manages the pre-processing, as well as the proper usage of the underlying CCSM and BPI authentication and authorization.
CCSM Layer
Messaging: A messaging capability that allows the exchange of messages between CCSM nodes that comprise either received transactions or a new proposed CCSM state.
Transaction Pool: A transaction pool that holds, properly sequences, pre-processes, and batches all submitted CCSM transactions for processing by the CCSM Virtual State Machine.
Virtual State Machine: A Virtual State Machine deterministically processes in a cryptographically verifiable manner all submitted transactions for CCSM state changes.
Storage: A storage system for the cryptographically linked current and historical state of all CCSM State Objects.
The Baseline Protocol initiative was announced on March 4, 2020 and launched as an open source project on March 19, 2020, supported by fourteen founding companies. More companies joined the effort shortly thereafter and continue to do so. In 2021, the Enterprise Ethereum Alliance and OASIS collaborated to establish the Baseline Protocol and other projects as EEA Community Projects.
The work of the community is maintained under the CC0 1.0 Universal public domain dedication.
The Baseline Protocol is a set of techniques that must be implemented in a standard way across different systems. The draft standard was completed in September 2021 and submitted to OASIS for review. The current specifications are maintained publicly.
There are lots of opportunities to get informed, get involved, get value out of developing reusable components, and ultimately deploy the Baseline Protocol in your own offerings. Go to https://baseline-protocol.org and click "Join the Team".
New contributors to the codebase and standard: see the contribution guidelines.
The Baseline Protocol is the emerging standard for synchronizing state across different systems of record over the internet, using a public blockchain as a common frame of reference. This applies to traditional corporate systems of record, any kind of database or state machine, and even different blockchains or DLTs. It is particularly promising as a way to reduce capital expense and other overheads while increasing operational integrity when automating business processes across multiple companies.
The approach is designed to appeal to security and performance-minded technology officers.
You can find all the details on the current version of the Baseline Protocol here.
Version 1.0 of the Baseline Protocol has been released. It is composed of a set of six core packages that are available open source under the CC0 1.0 Universal public domain dedication. For the full license text, refer to the license.
You can find more about the source code here.
The Baseline Protocol Standard will be a set of three specifications -- CORE, API, and CCSM -- that together provide the requirements to be satisfied to implement a compliant Baseline Protocol Implementation (BPI).
Today, there are demos, prototypes, and production systems being developed in more places than can be tracked, and some have been submitted as public domain contributions to the community.
It is developed and will be ratified as an OASIS open standard, available under the CC0 1.0 Universal public domain dedication. For the full license text, refer to the license.
You can find more details on the Baseline Protocol Standard here.
A growing number of examples and demos to help you understand baselining and give you ideas for your own projects can be found here.
The first complete reference implementation, BRI-1, has been developed by individuals and community leaders including Provide, EY, Nethermind, ConsenSys, and others.
The Baseline Protocol standard does not stipulate the use of any particular state machine as the common frame of reference where baseline proofs are deposited and managed in a baselined workgroup. However, the first public Layer-2 implementation of this service is called Baseledger; details may be found here.
All work of the Baseline Protocol initiative is maintained publicly in a GitHub repo.
You don't need any special access to the repo to get involved and start contributing. Follow these steps to fork the repo and submit pull requests. Anyone with a Github ID can also create and edit their own Issues, participate in public meetings, and join the various communication and collaboration channels that the community maintains.
There are four ways to contribute:
Write code (Architecture, Spikes, Issues, Tasks)
Write specifications (Epics, Stories, Prioritizations, Use Cases)
Write content and communicate it to more potential contributors, developers and product owners, and other stakeholders -- Join the communications team on Slack
Help prioritize work and develop incentives to get it done by joining the General Assembly or becoming a Core Developer or TSC Member.
There is one other way to contribute, and it's the most important: use the work in the Baseline Protocol to improve your own offerings. The Baseline Protocol is not a product or platform...the product is YOUR product.
Here is the link to the Baseline Protocol code of conduct.
Technical contributors are either working on architecture or developing code...but even correcting the language of documentation counts as a technical contribution and qualifies you to vote in upcoming TSC elections.
As of October 1, 2021, the baseline community will organize work in a similar fashion to Ethereum's EIPs (Ethereum Improvement Proposal). These will be called BaseLine Improvement Proposals, or BLIPs for short, and they will be maintained here. YOU ARE ENCOURAGED TO SUBMIT IDEAS to the BLIP repo, and we are well-organized to review and act on them in a timely fashion. You never know -- we might decide to raise a grant to pay for your idea to get executed. It's happened before. Try it out!
Technical tasks are written as Github Issues. Issues will be reviewed, lightly prioritized, and communicated as "hey, help out here!" messaging to the developer community every two weeks. The TSC will periodically review what Issues and communications best succeeded in attracting help.
An Issue should be constructed, in particular, with acceptance tests. All other elements of a good Issue should be known to any practicing developer.
Most Issues should be attached to an Epic (see below).
A good Task/Issue starts with a Verb: "Implement xyz."
Follow these steps when submitting a pull request (a command-line sketch follows the list):
Fork the repo into your GitHub account. Read more about forking a repo on GitHub here.
Create a new branch, based on the master branch, with a name that concisely describes what you're working on (e.g. add-mysql).
Ensure that your changes do not cause any existing tests to fail.
Submit a pull request against the master branch.
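On the command line, the steps look roughly like this; the fork URL placeholder and the add-mysql branch name are examples only:

```bash
# Fork via the GitHub UI first, then:
git clone https://github.com/<your-github-id>/baseline.git
cd baseline
git checkout -b add-mysql master   # new branch named for the work
# ...make your changes and run the tests...
git add .
git commit -m "Add MySQL support"
git push origin add-mysql
# Then open a pull request against the upstream master branch on GitHub.
```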
Good practice strongly favors committing work frequently and not loading up a long period of work in isolation. Be brave...let others see what you are working on, even if it isn't "ready."
Anyone can do a pull request and commit work to the community. In order for your work to be merged, you will need to sign the eCLA (entity contributor agreement) or iCLA (individual contributor agreement). Here are the details: https://www.oasis-open.org/resources/projects/cla/projects-entity-cla
The iCLA happens automatically when people submit a pull request, or it can be accessed directly at https://cla-assistant.io/ethereum-oasis/baseline
Merging to Master requires review by THREE Core Developers. The TSC seeded the initial set of Core Developers. Now, any active Member can become a Core Developer. Core Developers may add more Core Developers by rough consensus, and the TSC may step in to resolve cases where this process fails.
The specifications work of the community can be done by anyone, both technical and non-technical contributors. The focus is on finding evidence for a requirement and articulating it in the form below. The General Assembly is the coordinating body for this work.
The Baseline Protocol initiative uses Zenhub to create and manage both specification work and active protocol requirements and prioritization. (Zenhub should be a tab in your Github interface if you are using the Chrome extension. There is also a web-app here.)
Zenhub enables Epics to nest, while Issues don't nest...not really. Therefore, the community will employ the practice of using Issues for engineering Tasks, and Epics to contain high-level topics, which may have nested within them a set of agile Epics, and within those a set of Stories; even Stories may have other Stories nested in them. Engineering meets planning where a Story (in the form of a Zenhub Epic) is referenced by an Issue/Task. (This can work very well, but Zenhub's choice of the name "Epics" can cause confusion.)
A Zenhub "Epic" used as a high-level container for a grouping of work should be in short topic form -- primarily nouns.
A Zenhub "Epic" used as a Story should almost always follow the form: "As X, I need Y so that I can Z." An acceptable variant is the "now I can" form (note the "so that" clause is preserved):
A Party's System Administrator can look up Counterparties in an OrgRegistry (a public phone book) and add them to a Workgroup, so that they can start Baselining Records and Workflows.
A Party's System Administrator can quickly and easily verify a Counterparty's identity found in the OrgRegistry, so that they can be confident in adding the Counterparty to a Workgroup.
A Party's System Administrator can use some or all of the Counterparties and Workflow Steps defined in one Workgroup in Workflow Steps created within another Workgroup, so that Workgroups don't become yet another kind of silo.
The active contributors and maintainers of the Baseline Protocol repo can be found on Github. (Note: many contributors work in clones extending the protocol for their products. These people don't necessarily show up in the Github contributors list.)
The General Assembly is a regular meeting of community members.
Meeting signups are maintained on the "Join the Community" page on https://baseline-protocol.org.
Identify high-level baseline protocol project categories
Articulate specific projects within those categories for the community to execute
Rank projects by loose consensus so that contributors can spot attractive opportunities to work on
The group meets monthly (or biweekly, as needed) and is convened by the Chair. From a governance perspective, the General Assembly is considered a subcommittee of the TSC, but in practice, the General Assembly and TSC have coequal roles in the community.
Baseline Community Projects are the General Assembly's way of managing specific objectives for the baseline protocol initiative. General Assembly members commit themselves to one or more of these.
To be a General Assembly member means you are standing up -- alone or with a group of members -- to be accountable for one or more projects of the Baseline Protocol initiative.
While others may do the development and task-work, a General Assembly member commits to articulating and prioritizing the work, identifying community members and others who can do the work, and using whatever incentive structure is available (or whatever influence one has) to help ensure the work gets done.
General Assembly Members also keep the community of project stakeholders regularly informed of status. This includes cases where milestones aren't being met or when a project should be shut down and the learning recycled into another project.
Here's a snapshot of the project roadmap in the baseline protocol's dashboard:
Projects can be technical, strategic, or organizational. For example, a technical project would be finding a messaging system that suits the baseline specifications. An organizational project would be developing a powerful and flexible incentive structure for the baseline community. Both are managed and tracked in Zenhub as shown above.
General Assembly Members have Github write permission, which gives them the ability to add and edit Projects, Epics and Issues, and to assign (and be assigned) work to specific Contributors.
Unlike the TSC, which is fixed at eleven elected members, the size of the General Assembly is flexible. To balance openness and inclusiveness with the need to keep the team manageable and accountable, there is a single rule that determines membership: Accountability for Active Projects.
To apply, post the following to the community channels (or contact the TSC Chair directly):
Your name, email, company/organization, and GitHub ID
A one- or two-sentence description of the area around which you intend to help provide leadership
You will receive back an invitation to either a special session of the General Assembly or the next regular General Assembly meeting, and you may at that time state your intention to join the team. If no one asks for a further review by the TSC, you are approved as a new member of the General Assembly and can begin working on your Project(s).
Within 24 hours of approval as a new General Assembly Member, you will be sent an email with instructions on how to get permissions to the various tools you now have at your disposal, including:
The ability to create and edit projects, epics and issues on Zenhub (Write access to the Github repo.)
The ability to post messages to community members
Invitations to the General Assembly meetings
To prepare for a great start to your time on the General Assembly, review the existing projects and top-level epics. Then get an idea where you want to focus your attention, and if it isn't represented in the dashboard, consider adding a Project or talk with other General Assembly members about it.
Just as getting into the General Assembly is about stepping up to lead on a project, leaving the General Assembly would be a natural process of ending those projects or cycling off leading them. The General Assembly will periodically do a house-cleaning segment in the regular meeting to achieve loose consensus on whether any projects require pruning.
General Assembly Meetings have these standard agenda items:
Review of new Projects
Triage of Projects that seem to need help
Ranking of "top featured" Projects that will be promoted by community influencers
Celebrate successful Projects and clean up the list
Anyone with a GitHub ID can be a contributor to the Baseline Protocol, but you can also become a Member of our GitHub organization, which will allow you to get invitations to key meetings, be assigned to Issues, and vote for Technical Steering Committee members (provided you make at least one contribution that is successfully merged to master within the prior six months).
Being a Member gives you access to the GitHub repo as well as the Zenhub board.
Members can manage Issues in pipelines, assign others to Issues, create Epics and Milestones and push contributions to any unprotected branch other than Master/Main.
It's a good idea to become a member if you are making regular contributions and want to be assigned Issues, be responsible for assigning Issues to others, or both. Members can be technical contributors, contributors to specifications, or people stepping up to be accountable for projects.
Joining the Baseline Protocol as a Member is easy.
Technical contributors should contribute at least one pull request. Then, use the #github-membership-requests channel to post your GitHub ID, name, and company (optional), and a coordinator will ensure that you are added as a member within 24 hours or less. If you do not receive a response in that time, use one of our communication channels to contact the TSC Chair and/or any member of the TSC to expedite.
To be a Member, you must of course sign the eCLA or iCLA (see the contributor license agreements above). This is essential, because you have Write access to the repo, and OASIS governance requires that content be contributed under those agreements.
Non-technical contributors, and in particular those who wish to be on the General Assembly, do not need to submit a pull request. See the General Assembly section for details.
Trust is essential for members, because any member has the ability to make significant and direct changes to anything other than the Master branch or otherwise protected branches.
Members should:
show commitment by stepping up to contribute to key projects
be reliable in completing issues to which they have been assigned
attend regular member meetings when possible
follow the project style and testing guidelines
show an understanding of the nature and focus of the Baseline Protocol
be welcoming to others in the community
follow branch, PR, and code/docs style conventions
Once you are a member, you can become a Core Developer, get elected to the TSC, or join the General Assembly -- or just write awesome code, specifications, docs and communications.
Once you've done some work as a Member, you may wish to become a Core developer and have a direct hand in deciding what work is merged to the Main/Master Branch to become official Baseline Protocol technology and specifications.
Here's a list of the current core developers.
Core developers are people who take an active role in advancing the Baseline Protocol and/or related projects. They are primarily responsible for:
Contributing code or contributing to specification work in the form of PRs that are linked to open and prioritized issues
Reviewing and merging PRs into the master branch
Cutting, testing, and releasing new versions of the related Baseline projects
Working with the TSC and General Assembly to advance the Baseline Protocol
They can/should also contribute in the following ways:
Writing epics and issues to guide development
Setting up and supporting infrastructure (running demos, CI systems, community projects, etc...) that further Baseline
Working with the community to help with adoption
Presenting the project and key technologies to the public (in-person, webinar, videos, articles, etc...)
There are two ways to become a core developer: you are asked by a current core developer, or you make a request to an existing core developer to become one.
With either path, you become a "provisional core developer". As such, you will need to show consistent contributions of code and/or specifications to the project. These can be in the form of pull requests that get merged into master, or in the form of technical specifications, system architecture, and related artifacts that guide the development activities of others.
All provisional core developers that focus on code development (over standards) must meet with the existing core developers and demonstrate they are capable of the following:
Running the project locally
Using the testing framework
Explaining the components of the system architecture
Walking through the code and explaining the baseline process
Once the provisional core developer demonstrates these capabilities, the existing core developers will vote during the next scheduled core developer meeting to give the prospect full core developer status. A two-thirds majority is required to add a core developer. A vote that results in a tie or another contested outcome will be brought to the TSC for review.
In general, a core developer needs to:
be an expert in one or more fields related to the project
be an expert in finding and engaging the advice of other experts
show commitment over time with multiple PRs merged
be reliable in completing issues to which they have been assigned
attend the weekly core developers meetings (with occasional absences allowed)
demonstrate competency in software development or specification writing
follow the project style and testing guidelines
have a high degree of understanding of the project architecture
be welcoming to others in the community who are using the project
contribute in ways that substantially improve the quality of the project and the experience of people who use it
follow branch, PR, and code style conventions
There are weekly Core Developers meetings where members can discuss plans and issues related to the project, updates, release planning, and other related topics. Anyone may attend these meetings, but the primary participants are core developers. Core developers are required to produce meeting summaries and document decisions.
Meeting signups are maintained on the "Join the Community" page on https://baseline-protocol.org.
Core developer status can be removed in any of the following ways:
You stop reviewing PR's, responding to messages, answering emails, and/or generally ghost the project.
You are disrespectful towards anyone in the community and/or involved in the project.
You are disruptive to the general process of maintaining the project, meetings, discussions, issues, or other.
You notify the other core developers you would like to relinquish your core developer status.
How is the Baseline Protocol governed?
All repos in the Ethereum OASIS organization, including Baseline Protocol repositories, adhere to OASIS Open Project rules and processes.
In order to ensure clean IPR that allows Baseline to remain an open technology, OASIS rules require an entity contributor license agreement (eCLA) for persons or organizations contributing on behalf of a legal entity, and an individual contributor license agreement (iCLA) for community contributions. You must sign the appropriate CLA before your pull requests to the baseline repository will be merged. Check to see if your company has signed the eCLA.
Here is the link to the official governance document.
Ratified on March 18, 2020 by the Project Governing Board.
The Baseline Protocol shall be a project within the Ethereum-OASIS project of OASIS through at least May 31, 2020. The Project Governing Board (PGB) of the Ethereum-OASIS project, which was established in 2019 and is currently supported by the EEA and the Ethereum Foundation (EF), currently consists of Dan Burnett (ConsenSys), Tas Dienes (EF), and Chaals Neville (EEA) -- supported by Jory Burson (OASIS). The Baseline Project shall be supported under the existing contract with OASIS and shall require no additional fees beyond those already paid by EEA/EF and the parties to the Open Ethereum Project until May 31, 2020. Negotiation to continue the Baseline Project with OASIS shall be conducted between March and May 2020.
Contributions to the open source repo shall be under the Creative Commons public domain license (CC0 1.0 Universal).
The Baseline Protocol shall be governed by a Technical Steering Committee, with 7, 9 or 11 Members, one of whom will serve as TSC Chair. The initial number of seats for the bootstrapping period (see below) shall be 11.
A quorum of two-thirds of the TSC members can conduct any vote required of the TSC during any given meeting. Disputes on whether a matter should be tabled for a different TSC meeting can be presented to the PGB of the Ethereum-OASIS project for a decision. A move to table a matter can be lodged during a TSC meeting, and it shall be tabled and submitted to the PGB with a simple majority vote. If a majority cannot be achieved during the meeting, a minimum of two members that were present at the meeting in question may dispute the matter after the fact to the PGB.
No legal entity (or set of entities controlled by a single party) shall hold more than three seats out of eleven (or two seats out of seven or nine) on the TSC during any given period.
A TSC member may lose their seat upon missing two consecutive TSC meetings or three total during a period between elections. Removal is completed by a simple majority vote of the remaining TSC members who are not being considered for removal. The PGB of the Ethereum-OASIS project has the option to consider extenuating circumstances and determine whether or not to remove a member, if the TSC itself cannot come to a determination. After removal, a special election of the vacant seat shall be held among contributors. The seat will be up for re-election at the next regular election cycle.
In all cases (7-, 9- or 11-member TSC), the original three organizations (EY, Microsoft, ConsenSys) shall hold less than 50% of the seats during the initial six-month bootstrapping period.
One TSC Member shall serve as provisional chair of the TSC for six months. On September 30, 2020, all members and the chair shall be open for new elections. Members of the community shall have voting rights based on contribution (see below):
After the six month Bootstrap Period, there shall be a nomination and election period for electing TSC members, typically from the ranks of Contributors and Maintainers. The TSC voting members shall consist of eleven (7, 9 or 11) elected members chosen by Active Contributors. An Active Contributor is defined as any Contributor who has had a contribution accepted into the Master Branch of the codebase during the prior six (6) months. The TSC shall approve the process and timing for nominations and elections held on an annual basis.
Contributors who have the ability to commit code and contributions to a repo's main branch on the Baseline Protocol. A Contributor may become a Maintainer by a majority approval of the existing Maintainers. The initial number of maintainers required to merge a pull request to master in the github repo shall be three, but may be amended to no fewer than two by a simple majority of the maintainers.
Anyone in the technical community who contributes code, documentation or other technical artifacts to the Baseline codebase or Standards Specification.
The TSC shall determine the number of maintainers required to merge a contribution into the master branch of the repo. This shall be done during the first TSC meeting. Changes to this number require a simple majority of TSC members.
This document shall be ratified by the PGB before the public launch of the Baseline Protocol. Changes to this document shall require a simple majority of the PGB.
In the event that a TSC Member resigns more than 30 days before the end of their elected term, they may nominate a replacement. The resigning member shall consider the replacement’s active participation in, contribution to, and commitment toward the Baseline Protocol community when selecting such candidate.
The position shall remain open for 30 days or until 7 days after the next steering committee meeting, whichever comes first (the open period). During that period, any other TSC member or Active Contributor to the /baseline github repo, may nominate another person.
If there is only one candidate, the TSC Chair must issue a “call for objections” to the remaining TSC members. If there is unanimous approval (no objections), then the candidate is elected. If there are one or more objections recorded, the position must be put up for a vote of the Active Contributors according to the rules of normal TSC voting as written in this document.
If more than one candidate is nominated during the open period, the position must be run through the same voting process used during normal periodic TSC elections.
The “call for objections” or the vote must be conducted during a live TSC meeting that has a quorum present or must be conducted over a seven day asynchronous voting period via any messaging or voting system approved for use by OASIS. In a “call for objections,” non-responses shall be considered assent. In a vote, non-responses must be ignored. In the event that the number of TSC Members resigning in the same open period reduces the total number of TSC members to below a quorum from the previous number, then a new full TSC must be elected following the standard procedure for periodic elections.
Over 500 people and companies signed up to be notified about the opening of the Baseline Protocol Github repository between March 4th and 19th, 2020. Over 300 of them also signed in as group members. The members represent many of the largest companies in the world, spanning several sectors. They also represent startups, students and talented individuals.
(In the mid-1990s, the community amassed thousands of developers from many companies before any of those companies officially sanctioned their involvement. Many of the companies participating in the Baseline Protocol initiative embrace the work, and some are already adding baselining to their strategic plans. But anyone observing someone's participation in the baseline community should assume that they do so as individuals. Please refrain from making assertions about the intentions of their employers.)
The project governance board (PGB) is organized by OASIS and is accountable for ensuring the balance and integrity of the Ethereum-Oasis initiatives, such as the Baseline Protocol.
OASIS employs key personnel that support all the open standards and open source projects under its domain. In addition, the Enterprise Ethereum Alliance maintains a team that also supports EEA Community Projects such as the Baseline Protocol. And finally, members such as ConsenSys have assigned key personnel specifically to support and organize the community.
The technical steering committee (TSC) is accountable to the Project Governance Board for managing conflicts on merges and Core Dev self-organization. It also governs the allocation of grant money. And it meets regularly to set technical roadmaps and ensure progress of the community toward the ubiquitous implementation of the baseline protocol in systems of record everywhere.
On October 27, 2021, the following people were voted in to serve on the TSC through October 2022, or until the results of the next election are announced.
The Core Developers are a subset of the contributing members of the community who have demonstrated leadership and teamwork and approve all contributions before merging them to the Main branch of the repo.
The standards team worked through most of 2021 to develop a world-class technical specification that implementers and test developers can use to ensure compliance with the baseline protocol specification. The draft will now be shepherded through the OASIS process of review and amendments, culminating in becoming an official OASIS standard. Standards writing is hard, and reviewing the writing is equally hard. This team deserves the highest regard from the community for its hard-toiling service and commitment to excellence.
The outreach team consists of writers, evangelists, stakeholders, sponsors, events organizers and marketers who work together to develop enablement materials, promote baselining in the media, events and through direct engagement with thought leaders and decision-makers.
If you want to help lead an existing project or if you intend to create a new project, post your name, email, company/organization, GitHub ID, and a short description of the area you intend to lead to the community channels (or optionally contact the TSC Chair directly).
General Assembly Members will be added to the GitHub org, with Read access to the main repo and Write access to the Zenhub workspace.
Anyone in the member community can suggest agenda items for upcoming General Assembly meetings. Contact the TSC Chair.
Of course, all members must respect and adhere to the community's code of conduct.
Any member may request a confidential review of another member, to determine whether that member should be removed, by contacting any TSC Member. TSC Members and any others engaged for such a review are expected to act with the highest professionalism, work in strict confidence, and keep the identity of the requesting member confidential.
Become a Core Developer, responsible for governing the contributions that get merged to the official master branch;
Get elected to the Technical Steering Committee (TSC), accountable for architecture and governance of the core developers;
Join the General Assembly, accountable for proposing, prioritizing, and promoting Baseline Protocol projects.
Core Developers meet and discuss issues virtually via the #maintainers room in the community Slack.
Two-thirds of all current core developers constitute a quorum for a meeting involving a question of removal. A simple majority vote from core developers attending the meeting is required to remove a core developer, but the TSC may be brought in to arbitrate if the core developer to be removed or any other core developer wishes to dispute the action.
Governance documents from the existing .
Details on how to become a .
| Member | Company |
| --- | --- |
| Anais OFranc | Consianimis |
| Andreas Freund | ZK & L2 Consultant |
| Kyle Thomas | Provide |
| Daven Jones | Provide |

| Member | Company |
| --- | --- |
| Melanie Marsolier | Splunk |
| Jack Leahy | Provide |
| Others being confirmed for listing | |
| Name | Organization |
| --- | --- |
| Claudia Rauch | OASIS |
| Carol Geyer | OASIS |
| Chet Ensign | OASIS |
| Paula Lowe | EEA |
| Lillian Guinther | EEA |
| Sonal Patel | ConsenSys |
The Baseline core CCSM package provides interfaces for general interaction with an underlying mainnet or layer-2 distributed solution.
```
npm install @baseline-protocol/ccsm
```

(npm package soon to be published)

You can build the package locally with `make`. The build compiles the Baseline Solidity contracts package and its dependencies using Truffle.

The contracts package includes a generic "shield" contract which enforces on-chain verification of commitments before they are added to the on-chain merkle-tree. The logic encoded into the on-chain "verifier" contract can be custom code, or a workgroup can choose to use a generic verifier (e.g., a verifier that only requires that a commitment is signed by each workgroup member). For convenience, a "VerifierNoop" contract is provided in the contracts package for testing a baseline workflow. The "no-op" verifier returns `true` for any set of arguments with the proper types.
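As a quick way to exercise a deployed `VerifierNoop` from a test script, something like the following TypeScript sketch (using ethers v5) should work; the contract address and the ABI fragment are illustrative assumptions, not the actual interface shipped in the contracts package:

```typescript
// Hedged sketch: calling a deployed VerifierNoop via ethers v5.
// The ABI fragment and address below are placeholders -- check the
// compiled contracts package for the real verifier interface.
import { ethers } from "ethers";

const verifierAbi = [
  "function verify(uint256[] proof, uint256[] publicInputs) view returns (bool)", // assumed signature
];

async function main() {
  const provider = new ethers.providers.JsonRpcProvider("http://localhost:8545");
  const verifier = new ethers.Contract("0x...VerifierNoop address...", verifierAbi, provider);
  // The no-op verifier should return true for any correctly-typed arguments.
  console.log("verified:", await verifier.verify([0], [0]));
}

main().catch(console.error);
```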
| Name | Company |
| --- | --- |
|  | End-Labs |
|  | ZK & L2 Leader, EEA L2 WG Chair |
|  | Golden Next Ventures |
|  | Unibright |
|  | SAP |
|  | WhitePrompt |
|  | EY |
|  | Microsoft |
|  | Provide |
|  | Unibright |
|  | ConsenSys Mesh |
| Maintainer (with link to Github ID) | Company |
| --- | --- |
|  | ConsenSys |
|  | Provide |
|  | EY |
|  | Provide |
|  | Nethermind |
|  | Consianimis |
|  | Finspot |
|  | WhitePrompt |
|  | N/A |
The core of the day-to-day work of the Baseline Protocol community is managed by members and core developers. The Technical Steering Committee (TSC) is here mainly to ensure the smooth running of the core developer team and to be accountable to the Ethereum-Oasis Project Governance Board (and to the community at large).
While it is up to each contributor to decide what to work on, the TSC helps core developers and members by organizing planning and review sessions, roadmapping, and highlighting engineering tasks as high-priority for any given period.
The TSC's members are elected annually in September/October, concluding on the last day of that month, with the first elections held in September, 2020. The period for nominations will be announced no less than 30 days prior to elections. The method and management of the nominating process and the elections will be communicated to the community by the various channels in that timeframe.
The technical steering committee (TSC) is accountable to the Project Governance Board for bootstrapping the core developer group and stepping in to resolve any conflicts on merges or core developer self-organization.
On October 27, 2021, the following people were voted onto the TSC to serve through October 2022, or until the results of the next election are announced.
The core baseline package provides unified access to internal integration middleware interfaces for systems of record.
```
npm install @baseline-protocol/baseline
```

(npm package soon to be published)

You can build the package locally with `npm run build`.

Run the local test suite with `npm test`.
An initial set of JSON-RPC methods have been defined for inclusion in the specification. These methods allow easy interaction with on-chain shield contracts (which contain merkle-tree fragments) and maintain full merkle-trees (along with metadata) in local off-chain storage.
The following Ethereum clients are supported:

- Nethermind .NET client
- Any client supported by the commit-mgr service (e.g., Besu, Infura)

The package exposes the following interfaces:

- `IBaselineRPC`
- `IRegistry`
- `IVault`

The following providers of the Baseline API are available:

- Ethers.js - example provider; not yet implemented but included here for illustrative purposes
- RPC - generic JSON-RPC provider
Baseline core API package.
```
npm install @baseline-protocol/api
```

(npm package soon to be published)

You can build the package locally with `npm run build`.

Run the local test suite with `npm test`.
An initial set of JSON-RPC methods have been defined for inclusion in the specification. These methods allow easy interaction with on-chain shield contracts (which contain merkle-tree fragments) and maintain full merkle-trees (along with metadata) in local off-chain storage.
The following Ethereum clients are supported:

- Nethermind .NET client
- Any client supported by the commit-mgr service (e.g., Besu, Infura)

The package exposes the following interfaces:

- `IBaselineRPC`
- `IRegistry`
- `IVault`

The following providers of the Baseline API are available:

- Ethers.js - example provider; not yet implemented but included here for illustrative purposes
- RPC - generic JSON-RPC provider
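As an illustration of the generic RPC provider pattern, the following sketch issues a raw `baseline_getRoot` call over JSON-RPC; the endpoint URL is an assumption (any baseline-enabled client, such as Nethermind with the baseline RPC module, would work):

```typescript
// Minimal sketch of a raw JSON-RPC call to a baseline-enabled client.
// Assumes Node 18+ (global fetch) and a client listening on localhost:8545.
async function baselineGetRoot(rpcUrl: string, shieldAddress: string): Promise<string> {
  const res = await fetch(rpcUrl, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "baseline_getRoot",
      params: [shieldAddress], // shield contract whose merkle root we want
    }),
  });
  const { result } = await res.json();
  return result;
}

baselineGetRoot("http://localhost:8545", "0x...shield address...").then(console.log);
```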
| Name | Organization |
| --- | --- |
|  | ConsenSys |
|  | Enterprise Ethereum Alliance |
|  | Ethereum Foundation |
|  | Current Baseline Protocol TSC Chair |
|  | Unibright |
|  | Nethermind |
|  | Provide |
|  | Chainlink |
|  | Accenture |
|  | Splunk |
| Name | Company |
| --- | --- |
|  | End-Labs |
|  | ZK & L2 Leader, EEA L2 WG Chair |
| Samrat Kishor -- Co-Chair | Golden Next Ventures |
|  | Unibright |
|  | SAP |
|  | WhitePrompt |
|  | EY |
|  | Microsoft |
|  | Provide |
|  | Unibright |
| John Wolpert -- Co-Chair | ConsenSys Mesh |
| Package | Source Path | Description |
| --- | --- | --- |
| @baseline-protocol/api | core/api | Core baseline API package providing unified access to the baseline JSON-RPC module and blockchain, registry and key management interfaces |
| @baseline-protocol/baseline | core/baseline | Core baseline package providing unified access to internal integration middleware interfaces for systems of record |
| @baseline-protocol/ccsm | core/ccsm | Core ccsm package providing interfaces for general interaction with an underlying mainnet |
| @baseline-protocol/identity | core/identity | Core identity package providing interfaces for organization registry and decentralized identifiers (DIDs) |
| @baseline-protocol/privacy | core/privacy | Core privacy package providing interfaces supporting Prover systems and zero-knowledge cryptography |
| @baseline-protocol/types | core/types | Core reusable type definitions |
| @baseline-protocol/vaults | core/vaults | Core vault package providing management interfaces for digital authentication credentials such as keys and secrets |
| Method | Params | Description |
| --- | --- | --- |
| baseline_getCommit | address, commitIndex | Retrieve a single commit from a tree at the given shield contract address |
| baseline_getCommits | address, startIndex, count | Retrieve multiple commits from a tree at the given shield contract address |
| baseline_getRoot | address | Retrieve the root of a tree at the given shield contract address |
| baseline_getProof | address, commitIndex | Retrieve the membership proof for the given commit index |
| baseline_getTracked | | Retrieve a list of the shield contract addresses being tracked and persisted |
| baseline_verifyAndPush | sender, address, proof, publicInputs, commit | Insert a single commit in a tree for a given shield contract address |
| baseline_track | address | Initialize a merkle tree database for the given shield contract address |
| baseline_untrack | address | Remove event listeners for a given shield contract address |
| baseline_verify | address, value, siblings | Verify a proof for a given root and commit value |
Baseline core messaging package.
NATS is currently the default point-to-point messaging provider and the recommended way for organizations to exchange secure protocol messages. NATS was chosen due to its high-performance capabilities, community/enterprise footprint, interoperability with other systems and protocols (i.e. Kafka and MQTT) and its decentralized architecture.
```
npm install @baseline-protocol/messaging
```

(npm package soon to be published)

You can build the package locally with `npm run build`.
The package exposes the `IMessagingService` interface. The following messaging providers are available:

- NATS
- Whisper
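As a sketch of what point-to-point messaging over the NATS provider can look like at the wire level, the following uses the `nats` npm client (nats.js v2) directly; the subject name and message shape are illustrative assumptions rather than protocol-defined values:

```typescript
// Hedged sketch: publishing and receiving a protocol message over NATS.
// Subject and payload shape are illustrative, not protocol-defined.
import { connect, StringCodec } from "nats";

async function main() {
  const sc = StringCodec();
  const nc = await connect({ servers: "nats://localhost:4222" });

  // Listen for inbound protocol messages on an example subject.
  const sub = nc.subscribe("baseline.inbound");
  (async () => {
    for await (const msg of sub) {
      console.log("received:", sc.decode(msg.data));
    }
  })();

  // Send an outbound message to a counterparty's subject.
  nc.publish("baseline.inbound", sc.encode(JSON.stringify({ type: "BLINE", payload: "..." })));
  await nc.flush();
}

main().catch(console.error);
```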
Parties store data in local systems of record (Mongo, Oracle, SAP, etc.). Components involved in the baseline process are given CRUD access to these systems and conduct a series of operations to serialize records (including any associated business logic), send those records to counterparties, receive the records, sign them, generate proofs, and store these proofs to a Merkle Tree on the Mainnet.
Connectors for various systems can be found here.
The first step in baselining is setting up the counterparties that will be involved in a specific Workflow or set of Workflows. This is called the Workgroup. One initiating party will set this up by either:
Adding an entry to an existing OrgRegistry smart contract on the Mainnet (see the sketch following this list);
Selecting existing entries on a universal OrgRegistry;
Creating a new OrgRegistry and adding entries to it.
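As a rough illustration of the first option, the sketch below registers an organization with an existing OrgRegistry via ethers v5; the function name, arguments, and endpoint encoding are hypothetical stand-ins for the actual contract ABI:

```typescript
// Hypothetical sketch: adding an entry to an existing OrgRegistry contract.
// registerOrg(...) and its argument list are assumptions, not the real ABI.
import { ethers } from "ethers";

const orgRegistryAbi = [
  "function registerOrg(address orgAddress, bytes32 name, bytes messagingEndpoint) returns (bool)",
];

async function registerOrg(registryAddress: string, signer: ethers.Signer) {
  const registry = new ethers.Contract(registryAddress, orgRegistryAbi, signer);
  const tx = await registry.registerOrg(
    await signer.getAddress(),
    ethers.utils.formatBytes32String("Acme Corp"),           // org display name
    ethers.utils.toUtf8Bytes("nats://acme.example.com:4222") // messaging endpoint
  );
  await tx.wait(); // entry is now visible to prospective counterparties
}
```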
**A Corporate Phone Book?** It is possible over time for a single instance of an orgRegistry contract on the Mainnet to become a de facto "phone book" for all baselining counterparties. This would provide a convenient place to look up others and to quickly start private Workflows with them. For this to become a reality, such an orgRegistry would need to include appropriate and effective ways to verify that the entry for any given company is the authentic and correct entry for baselining with that entity. This is an opportunity for engineers and companies to add functionality to the Baseline Protocol.
Next, establish point-to-point connectivity with the counterparties in your Workgroup by:
Pulling their endpoint from the OrgRegistry;
Sending an invitation to connect to the counterparties and receiving authorization credentials.
Now the counterparties are connected securely. A walk-through of this process is here.
A Workgroup may run one or more Workflows. Each Workflow consists of one or more Worksteps.
Before creating a Workflow, you must first create the business rules involved in it. The simplest Workflow enforces consistency between records in two or more Counterparties' respective databases.
More elaborate Workflows may contain rules that govern the state changes from one Workstep to the next. These can be written in zero knowledge circuits, and in a future release, one will be able to send business logic to counterparties without constructing special zk circuits (but allowing the core zk "consistency" circuit to check both code and data).
To set up this business logic, use the Baseline Protocol Privacy Package here.
Once the business logic is rendered into provers, deploy the Workflow as follows:
First deploy a Node that has the baseline protocol RPC interface implemented. The Nethermind Ethereum Client is the first to implement this code. Alternatively, you can deploy the commit-mgr Ethereum client extension plus a client type of your choice (i.e. Besu, Infura, etc.)
Next, use the `IBaselineRPC` call in the Client to deploy the Shield and Verifier contracts on-chain. This can be found here.
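Once the Shield contract is deployed, one would ask the baseline-enabled client to start tracking its on-chain merkle tree. A minimal sketch, assuming a local JSON-RPC endpoint and Node 18+ for global fetch:

```typescript
// Sketch: register a newly deployed Shield contract with the client's
// local merkle-tree storage via baseline_track (endpoint URL assumed).
async function trackShield(rpcUrl: string, shieldAddress: string): Promise<boolean> {
  const res = await fetch(rpcUrl, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ jsonrpc: "2.0", id: 1, method: "baseline_track", params: [shieldAddress] }),
  });
  return (await res.json()).result; // true once the tree is being tracked
}
```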
Now that the Workgroup and Workflow have been established, counterparties can send each other serialized records, confirm consistency between those records, and enforce business rules on the state changes from Workstep to Workstep.
An example of this is in the BRI-1 Reference implementation here. And a walkthrough of an "Alice and Bob" simple case is here and here.
Baseline core identity package.
```
npm install @baseline-protocol/identity
```

(npm package soon to be published)

You can build the package locally with `npm run build`.
Each organization registered within the `OrgRegistry` first generates a `secp256k1` keypair and uses the Ethereum public address representation as the "primary key" for future resolution. This key SHOULD NOT sign transactions. A best practice is to use an HD wallet to rotate keys, preventing any account from signing more than a single transaction. Note that an organization may not update its `address`.
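A minimal sketch of that practice using ethers v5 (the derivation path shown is a common Ethereum convention, not something mandated by the protocol):

```typescript
// Sketch: keep a stable org address while signing with rotated child keys.
// Uses ethers v5; the derivation path is a conventional choice, not mandated.
import { ethers } from "ethers";

const root = ethers.Wallet.createRandom();                 // secp256k1 keypair + mnemonic
console.log("org primary key (address):", root.address);   // registered, never signs

// Derive a fresh child account for each transaction from the same mnemonic.
const hd = ethers.utils.HDNode.fromMnemonic(root.mnemonic.phrase);
const txSigner = new ethers.Wallet(hd.derivePath("m/44'/60'/0'/0/1").privateKey);
console.log("one-time signing address:", txSigner.address);
```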
Baseline core privacy package.
```
npm install @baseline-protocol/privacy
```

(npm package soon to be published)

You can build the package locally with `npm run build`.
The package exposes the `IZKSnarkCircuitProvider` interface. The following zkSNARK toolboxes are supported:

- gnark
If you want to build with the Baseline Protocol, you will find these helpful:
| Resource | Quick Access |
| --- | --- |
| v1.0 Code base documentation | |
| v1.0 Code base | |
| Developers quickstart | |
| Implementation guide | Soon |
| Developers slack channel | Here: #dev |
| Reference implementations | |
The Baseline Protocol is a set of techniques and specifications that can be implemented by any number of products, services and solutions.
The baseline initiative's primary mission is to standardize these abstract techniques within the OASIS open standards process. To do that, it's best to start in the crucible of real, functional code. Through the intensive development of a set of core working packages and reference implementations, the team discovers what works and what needs to be specified in the standard.
Both the core libraries and the reference implementations should be thought of as a starting point for developers, a way of understanding the techniques and getting a jump start on their own work. So long as a developer follows the standard specifications of the baseline protocol, their work should interoperate with anyone else's implementation.
If the core libraries and the reference implementations are just instances of the standard, why separate them into core/ and different reference implementations (see examples/)? The answer is that the separation makes it easier to implement changes. The core should always be stable and capable of informing specifications development, where reference implementations are more "opinionated" and can constitute different approaches that utilize specific components, infrastructure, etc., while using the same core interfaces.
The core libraries are rails for standardization while reference implementations are rails for adoption.
Companies and individuals contributing to the Baseline Protocol are not putting in effort out of a sense of charity. Each organization and individual contributor can and should be able to draw a straight line from the strengthening of the protocol to their own commercial or individual success.
To this end, Reference Implementations -- not unlike different implementations of the Ethereum Yellow Paper -- may build in dependencies on specific products or add proprietary components and tools that might feature or advantage a company or group of companies. This is allowed -- and encouraged -- so long as the Reference Implementation does not introduce confusing naming or positioning that would give a developer the sense that those elements are essential for baselining. That said, the best Reference Implementations will endeavor to be modular, so that their work can be used with a variety of components without someone having to perform "surgery" on the code.
Over time, it is expected that many implementations -- both proprietary and otherwise -- will be developed and not submitted back into the Baseline Protocol open source repository. But the community is grateful to those companies and individuals that provide their work as contributions back to open source. These contributions are stored in the examples/ folder of the repository under the naming convention below.
Note: All source code in the Baseline Protocol repository is licensed under the Public Domain CC0-Universal license; it can be forked and any of its contents can be copied and used by others at will.
On August 26, 2020, the first set of generalized core libraries for the baseline protocol were released, and the team delivered a new reference implementation to go with it. By convention, all subsequent implementations will follow the form "BRI-#".
Most/all baseline reference implementations shall include a "base" example application, reusable libraries, and sometimes relevant components, such as specific connectors.
Baseline Reference Implementation #1 (BRI-1) was developed by contributors from Provide, Nethermind, EY and others, with support and oversight from the entire Baseline Maintainer team. This implementation correctly utilizes the core Baseline Protocol abstract interfaces, which are free of dependencies on any particular set of components or proprietary systems, but it also relies heavily on tools and systems made available by Provide. Provide's Shuttle infrastructure deployment and manifold system is used by many baselining developers to make their work easier. Nethermind is the first public Ethereum client to implement the `IBaselineRPC` interface (found here). NATS is a production-ready enterprise messaging layer that meets the privacy and performance requirements for baselining.
Details on BRI-1 can be found here, and the code can be found here.
Baseline Reference Implementation #2 (BRI-2) is the second "baseline reference implementation". The purpose of this project is to show a baseline stack using different services compared to BRI-1, but this stack must still comply with the baseline specifications, therefore allowing interoperability with other baseline stacks. `bri-2` introduces the `commit-mgr` service to `baseline`. The `commit-mgr` acts as an extension to a web3 provider, which allows a variety of Ethereum clients to become "baseline compatible".
The Baseline Protocol Standard will be a set of three specifications (CORE, API, and CCSM) that together provide the requirements to be satisfied to implement a compliant Baseline Protocol Implementation (BPI). The v1.0 draft for each of those documents is available on Github.
The CORE specification describes the minimal set of business and technical prerequisites, the functional and non-functional requirements, and a reference architecture that, when implemented, ensures that two or more systems of record can synchronize their system state over a permissionless public Distributed Ledger Technology (Consensus Controlled State Machine) network. An overview of the CORE specification is available.
The API specification describes the Baseline programming interface and the expected behaviors of all instances of this interface, together with the required programming interface data model. An overview of the API specification is available.
The CCSM specification describes the requirements that a CCSM must satisfy for it to be used in a BPI. An overview of the CCSM specification is available.
Distributed Ledger Technology or Consensus Controlled State Machine (CCSM) is the foundational enabler of a Baseline Protocol Instance (BPI) with no or limited trust assumptions. The requirements that a CCSM must satisfy for it to be used in a BPI are defined in the CCSM specification of the Baseline Standard. They fall into the following categories:
Security is one of the most important characteristics of a CCSM. The specification sets requirements for a CCSM's supported cryptographic algorithms and their implementations, node key management, and verifiably secure execution frameworks.
CCSMs range in the level of privacy they support. One approach ensures that the contents of a CCSM transaction or storage are meaningless to parties not participating in an interaction. Another, more stringent, approach is to use a CCSM that precludes the accessibility of such information to non-participating parties. The specification sets the minimum requirement at the first approach, but the parties can agree to require that the BPI support the second approach.
To support the required commercial transaction volume between Baseline Protocol counterparties, the CCSM utilized by a BPI should be chosen with these transaction volumes in mind, especially since in a public CCSM setting there will potentially be a significant volume of transactions competing for scarce block space.
The specification sets requirements for when transactions connect one CCSM with another CCSM for the purpose of interoperating assets or data across BPIs. It addresses two cases - when CCSMs use the same CCSM Protocols and when they use different CCSM Protocols.
Network in this context refers to the nodes of a CCSM that form the CCSM network. A CCSM node has several components that impact the network namely its Peer-to-Peer (P2P) message protocol and its consensus algorithm.
The consensus algorithm is the most important component of a CCSM as it ensures the consistency of the network at any given time. Therefore, the requirements on the consensus algorithms are very stringent.
CCSMs most often utilize a virtual state machine (VSM) for CCSM computations of CCSM state transitions; a digital computer running on a physical computer. A VSM requires an architecture and execution rules which together define the Execution Framework.
Data integrity over time, in other words the inability to alter data once it has been committed to the state of the CCSM, is one of the key features of typical CCSMs.
Depending on the CCSM employed in the implementation of a BPI, the security requirements around integration need to be fulfilled either by the CCSM itself or, alternatively, by the CCSM Abstraction Layer.
The draft of the CCSM specification document is available on Github: here.
The Baseline Protocol provides a framework that allows Baseline Protocol Implementations (BPIs) to establish a common (business) frame of reference enabling confidential and complex (business) collaborations between enterprises without moving any sensitive data between traditional Systems of Record.
The CORE specification describes the minimal set of business and technical prerequisites, the functional and non-functional requirements, and a reference architecture that, when implemented, ensures that two or more systems of record can synchronize their system state over a permissionless public Distributed Ledger Technology (Consensus Controlled State Machine) network. It covers the following:
This section provides definitions, key concepts, and overviews of the components of a Baseline Protocol Implementation compliant with the requirements of the specification. It provides implementers with the guidance to build and operate implementations of the Baseline Protocol not only in an informal context but also in very formal, highly regulated contexts.
Identity in the context of the specification is defined as Identity = <Identifier(s)> + <associated data>, where associated data refers to data describing the characteristics of the identity that is associated with the identifier(s). The approach is that every identity is controlled by its Principal Owner and not by a 3rd party, unless the Principal Owner has delegated control to a 3rd party.
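Rendered as a TypeScript type for illustration only (the field names are assumptions, not part of the specification):

```typescript
// Illustrative only: the spec's Identity = <Identifier(s)> + <associated data>.
interface Identity {
  identifiers: string[];                    // e.g., a DID or an Ethereum address
  associatedData: Record<string, unknown>;  // characteristics bound to the identifiers
}
```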
BPI Abstraction Layers are the critical umbilical cords of a BPI to its underlying CCSM and to external applications such as Systems of Record or other BPIs. A BPI has two abstraction layers -- the BPI Abstraction Layer and the CCSM Abstraction Layer -- and the specification defines a set of common requirements, differentiating between the two where necessary.
This section of the specification focuses on the concepts and requirements that describe the key capabilities to connect the BPI Abstraction Layer to the BPI Processing Layer and the correctness preserving integration of different BPIs.
Agreement execution within the context of the specification is the deterministic state transition from state A to state B of a state object in a BPI, where the state object represents a valid agreement state between agreement counterparties.
BPI storage is a key enabler to scale BPI stacks that are data-intensive, data-sensitive, or both. The specification defines BPI data storage -- outside of a CCSM -- as the storing of information in a digital, machine-readable medium where the data stored is relevant for the proper functioning of the BPI stack.
This section specifies the conformance levels of the Baseline Protocol Standard. The conformance levels offer implementers several levels of conformance, enabling competitive differentiation.
The draft of the CORE specification document is available on Github.
The API Specification describes the Baseline programming interface and expected behaviors of all instances of this interface together with the required programming interface data model.
For more information about the above, please refer to the Packages section: here.
The draft of the API specification document is available on Github: Forthcoming
The Swagger API is available here.
| Package | Description |
| --- | --- |
| Baseline | Internal integration middleware interfaces for baselining systems of record |
| CCSM | Interfaces for general interaction with the underlying mainnet |
| Privacy | Interfaces supporting general consistency, zero-knowledge cryptography protocols and secure multi-party computation (MPC) |
| Registry | Interfaces for the organization registry |
| Vault | Tools and methods for managing digital authentication credentials for User, Organization and Workgroup instances |
Baseline Reference Implementation-1 using the core Provide stack.
This reference implementation of the core interfaces specified in the v1.0 release of the Baseline Protocol is called BRI-1 and has been contributed to the open source community under a public domain CC0-Universal license by individuals and companies including Provide, EY, Nethermind, ConsenSys, and others. It heavily utilizes the core Provide application stack and is compatible with Shuttle, an on-ramp for baselining.
The reference implementation is instrumented to run on the Ethereum Ropsten testnet and can be configured to run on many other public or permissioned EVM-based blockchains.
The BRI-1 "base" example codebase can be found here.
The Provide stack is a containerized microservices architecture written in Golang. The core microservices depend on NATS, NATS streaming, PostgreSQL and Redis. Note that the NATS server component is a fork that supports decentralized, ephemeral bearer authorization using signed JWTs.
**Ident**: Identity and authorization services for applications (i.e., workgroups in the context of the Baseline Protocol), organizations and users. Read more about how authorization works here.

**Vault**: Key management for traditional symmetric and asymmetric encrypt/decrypt and sign/verify operations, in addition to support for the elliptic curves required for advanced messaging and privacy applications.

**NChain**: REST API for decentralized application building, and for deploying and managing peer-to-peer infrastructure. The service provides daemons for (i) monitoring reachability of network infrastructure and (ii) creating durable, real-time event and analytics streams by subscribing to various networks (i.e., EVM-based block headers and log events).

**Privacy**: REST API service that provides zero-knowledge proof circuit management (creation and verification) to enable trust-minimized enterprise ecosystems. Privacy delivers an agnostic privacy and cryptography solution, with built-in Gnark compatibility, at enterprise scale.

**PostgreSQL**: Each microservice has an isolated database; each service connects to a configured PostgreSQL endpoint with unique credentials. When running the stack locally (i.e., via `docker-compose`), each isolated database runs within a single PostgreSQL container.

**NATS**: NATS and NATS streaming are used as a fault-tolerant messaging backplane to dispatch and scale idempotent tasks asynchronously. Each NATS subject is configured with a `ttl` for the specific message type which will be published to subscribers of the subject; if no positive acknowledgement has been received for a redelivered message when its `ttl` expires, the message will be negatively acknowledged and dead-lettered.

**Redis**: Caches frequently-updated and frequently-accessed key/value pairs (i.e., real-time network metrics).

**Provide CLI**: Command line interface to build and deploy Provide services programmatically.
Each microservice requires the presence of a `bearer` API token to authorize most API calls. A `bearer` API token is an encoded JWT which contains a subject claim (`sub`) referencing the authorized entity (i.e., a `User`, `Application` or `Organization`). The encoded JWT token will, in most cases, include an expiration (`exp`) after which the token is no longer valid. Tokens issued without an expiration date (i.e., certain machine-to-machine API tokens) can be explicitly revoked. The standard and application-specific JWT claims are signed using the `RS256` algorithm. The authorized entity may use the signed bearer `Token` to access one or more resources for which the `Token` was authorized. Unless otherwise noted, all API endpoints require the presence of a bearer `Authorization` header.
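For debugging, the `sub` and `exp` claims can be read without verifying the `RS256` signature (verification requires the issuer's public key). A small sketch, with the environment variable name as an assumption:

```typescript
// Sketch: inspect a bearer token's claims without signature verification.
// Requires Node 16+ for the "base64url" encoding.
function decodeClaims(bearerToken: string): { sub?: string; exp?: number } {
  const payload = bearerToken.split(".")[1]; // JWT = header.payload.signature
  return JSON.parse(Buffer.from(payload, "base64url").toString("utf8"));
}

const claims = decodeClaims(process.env.PROVIDE_API_TOKEN ?? ""); // env var name assumed
console.log("subject:", claims.sub, "expires:", claims.exp && new Date(claims.exp * 1000));
```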
**Baseline Proxy**: Integration middleware for baselining systems of record, implemented in the form of the baseline proxy, which is fetched and deployed directly from the provide-cli. BRI-1 makes use of the baseline proxy image to complete the setup of a generalized baseline ecosystem that is fully compliant with the Baseline standard.
This implementation of the Baseline Protocol leverages the Provide stack for security (i.e., authn and authz), managing key material, signing transactions, subsidizing transaction fees (i.e., if a gas/subsidy faucet is configured at runtime), etc. The various APIs in the core Provide microservices fully implement the interfaces defined in the Baseline Protocol specification (i.e., the `IRegistry` and `IVault` interfaces).
As illustrated above, NATS is used to facilitate the handling of inbound and outbound protocol messages; in the context of the Baseline Protocol, NATS acts as a control plane to route inbound protocol messages to an appropriate asynchronous handler. Such handlers could, for example, ensure that `BLINE` protocol messages represent verifiable, valid state transitions (i.e., as governed by the business process and privacy protocol) prior to updating baselined records within a system of record such as SAP or Microsoft Dynamics.
This reference implementation provides a complete, robust implementation of the Baseline Protocol specification.
It is important to note that a subset of the specification can be implemented using the core concepts demonstrated in this reference implementation without depending on the entire Provide stack.
For example, implementing only NATS as a control plane for dispatching inbound protocol messages is possible using only the `@baseline-protocol/messaging` package. In such a case, the entire protocol as demonstrated within this reference implementation would be far from complete, but protocol messages could be sent and delivered.
Provide has contributed a complete reference implementation of the core interfaces as specified in the v0.1.0 release of the Baseline Protocol. This reference implementation will inform the OASIS standard.
The reference implementation runs on the Ethereum Ropsten testnet and can be configured to run on many other public or permissioned EVM-based blockchains; a complete list of supported Ethereum clients will be curated over the coming months.
Clone the Baseline repository:
Checkout the bri-1-privacy branch:
Once checked out, navigate to the /examples/bri-1/base-examples folder.
Run the following to install local packages and run the test suite against the Ropsten testnet:
In a separate terminal window, run the following command to view all container logs while the tests are running:
The reference implementation illustrates Alice & Bob, respective owners of Alice Corp and Bob Corp, establishing a workgroup and baselining an in-memory record (i.e., a JSON object) using the Provide stack.
The following high-level architecture diagram illustrates how the concepts discussed in previous sections (i.e., the Provide and Baseline Protocol architecture sections) fit together in the context of two organizations deploying the Baseline Protocol using Provide as a common technology stack and their own cloud infrastructure vendors (i.e., AWS and Azure). The reference implementation deploys these same two distinct stacks to your local machine using docker-compose when running the test suite.
There are several assertions and checks that must be carried out, in full and in a specific order, to ensure that the base example can run and fully demonstrate the functionality and potential of the Baseline Protocol as leveraged by the Provide stack. Subsequent sections illustrate the necessary components and their outputs, as well as the rationale for their importance.
**Workgroup assertions**: The existence and accessibility of all on-chain artifacts will be asserted; this includes the ERC1820 contract, the organization registry, the workgroup shield, the workflow circuit verifier contract, and the circuit identifier. We will also assert that we have established a connection with NATS, and that there is an active subscription to the baseline.proxy subject.

**Vault assertions**: The user will be able to register its corporation in the local registry and the on-chain registry using the default secp256k1 address. We will then ensure all vault-related services are fully functional. We will assert that a default vault has been created for the organization, along with a set of key pairs for the organization for babyJubJub and secp256k1, and finally ensure that the secp256k1 key pair resolves to the organization address.

**Messaging assertions**: We will assert that we have established a connection with NATS, and that there is an active subscription to the baseline.proxy subject.

**SNARK assertions**: TBD
Decoded workgroup invitation
The following JSON object represents the decoded invitation JWT signed by Bob Corp and delivered to Alice. The invitation has everything Alice needs to join Bob's new Baseline workgroup, register Alice Corp with the on-chain `OrgRegistry` contract, and use the protocol to synchronize her local Provide stack.
This section contains important past work that has since been deprecated.
The Microsoft Excel Connector project is nearly complete and published. Stay tuned for release shortly.
⚠️ The "Baseline-SAP-Dynamics" ERP connector codebase is being integrated with this new reference implementation as a result of the `v0.1` release.
Stefan Schmidt (Unibright), Kyle Thomas (Provide), Daniel Norkin (Envision Blockchain) May 21, 2020
The "Baseline-SAP-Dynamics Demo" shows a setup of different Enterprise Resource Planning Systems ("ERPs") using the Baseline Protocol to establish a common frame of reference on the public Ethereum Mainnet. The demo extends the Radish34 POC, showing a procurement process in a supply chain POC.
The open-source-available code of the development work on this demo evolved out of a Hackathon of the EEA Eminent Integration Taskforce members Unibright and Provide and is being made available alongside the Radish34 example.
The Baseline Protocol is an approach to using the public Mainnet as a common frame of reference between systems, including traditional corporate systems of record, any kind of database or state machine, and even different blockchains or DLTs. It is particularly promising as a way to reduce capital expense and other overheads while increasing operational integrity when automating business processes across multiple companies.
The approach is designed to appeal to security and performance-minded technology officers.
You can find all the details on the Baseline Protocol here.
The tasks set for the Community Bootstrapping Phase of the Baseline roadmap include extracting concepts from the Radish34 demo case to the protocol level. This demo therefore extends the Radish34 case by integrating off-chain systems of record, working out the major challenges, and providing solutions to them. The learnings should be manifested in a reference implementation that can support standards at the protocol level.
The Use-case shown in the demo follows this path:
A buyer, using SAP ERP, creates a Request For Proposal and sends it out to two of its potential suppliers
One supplier, using a Microsoft Dynamics D365 ERP, receives the Request For Proposal, turns it into a Proposal with different price tiers, and sends it back to the buyer
The buyer receives this Proposal, runs a comparison logic between the different received proposals (including those of other suppliers), decides on one specific proposal, creates a corresponding Purchase Order, and sends it to the supplier
The supplier receives the Purchase Order and continues the process
The use-case shown does not claim to be complete. For example, no Master Service Agreements are involved, and a productive process would continue with additional steps including Delivery Notes, Invoices and Payments, which are not in the scope of this demo.
The participants identified the following challenges that must be addressed:
Establishing a non-centralized rendezvous point for multiparty business process automation, with such place also providing a solution for automating the setup of a baseline environment for each process participant (here: a supplier or a buyer) on its own infrastructure (i.e., using the participant's own AWS or Microsoft Azure credentials); and
Establishing a minimum domain model, abstracting from the baseline target objects and offering a process oriented entry point for systems of record to integrate; and
Integrating systems of record via a suitable service interface.
The proposed architecture and solutions to these challenges are presented in the next sections.
The main idea is to orchestrate the container environment for each baseline participant in a way that best supports addressing the challenges mentioned above.
Baseline itself is a microservice architecture, where the different components of the architecture reside in Docker containers. The existing Radish34 demo applies a UI on top of this architecture to play through the demo case.
The architecture proposal of this demo builds upon the existing microservices, and adds layers to extract communication and integration with baseline towards an external system.
Baseline Containers: The microservices providing the Baseline Protocol and Radish34 use-case, based on this branch in GitHub, including several key fixes (i.e., unwiring cyclic dependencies within the existing Radish34 environment) and enhancements (i.e., point-to-point messaging between parties, use of a generalized circuit for baselining agreements).
Provide Containers: Provide's identity, key management, blockchain and messaging microservice API containers representing the technical entry point and translation layer for data and baseline protocol messages, and the provider of messaging infrastructure leveraged by the Baseline stack for secure point-to-point messaging.
Unibright Proxy: An extraction of the Unibright Connector (a blockchain integration platform), consisting of a simplified, context-related domain model and a RESTful api to integrate off-chain systems.
The actual system of record is integrated by on premise or cloud based integration software in the domain of the respective Operator, leading to the "full stack."
Each role in the process should run its own full-stack, connecting to the standardized Unibright Proxy by way of Shuttle.
Implementing the demo use-case as described and demonstrated herein arguably illustrates levels of technical and operational complexity that would prevent most organizations from successfully applying the Baseline approach to their processes.
A viable rendezvous point where every participant in a multiparty business process can "meet in the middle" to ensure common agreements exists between each party (i.e., agreement on the use-case/solution) and each technical team (i.e., agreement on the protocols, data models, integration points, etc.) is a prerequisite to starting any actual technical integration. Such a rendezvous point can only be considered "viable" if it:
is non-centralized; and
can automate container orchestration across major infrastructure vendors (i.e., AWS and Microsoft Azure); and
can provide atomicity guarantees across all participants' container runtimes during protocol upgrades (i.e., to ensure forward compatibility and continuity for early adopters).
Shuttle is a bring-your-own-keys rendezvous point enabling turnkey container orchestration and integration across infrastructure vendors. Shuttle de-risks multiparty enterprise "production experiments" using the Baseline Protocol, providing continuity to early adopters of the Baseline approach. Provide is actively contributing to the standards and protocol while commercially supporting Shuttle projects.
The following complexities related to enabling the Baseline Protocol for a multiparty process such as the one illustrated by the Radish34 use-case are addressed by Shuttle as an enterprise rendezvous point:
Infrastructure
Container Orchestration
Security
Dependency Graph
Blockchain
HD Wallets / Accounts
Meta transaction relay (i.e., enterprise "gas pump")
Smart Contracts (i.e., deployment, interaction)
Organization Identity / PKI / x509
Key material (i.e., for advanced privacy, messaging, zkSNARKs)
Baseline Protocol
Circuit Registry
Continuity & forward-compatibility (i.e., with rapidly-evolving protocols)
Point-to-point messaging (i.e., proof receipts, etc.)
Translation for DTO → Baseline Protocol
Baseline smart contract deployment to Ropsten testnet -- as of today, new projects are automatically subsidized by the Provide platform faucet when transaction broadcasts fail due to insufficient funds on every testnet. This same meta transaction / relay functionality will be helpful to organizations who want to participate in mainnet-enabled business processes in the future but do not want to hold Ether (i.e., when the Baseline Protocol has been deployed to the public Ethereum mainnet).
Baseline smart contract suite intricacy, as illustrated by the contract-internal CREATE opcodes issued from within the Shuttle deployer contract. This functionality will become a standardized part of the Baseline Protocol.
Container orchestration "work product" -- each organization, using its own AWS/Azure credentials, leverages Shuttle to automate the configuration and deployment of 13 microservice container runtimes to cloud infrastructure under its own auspices. Provide also has the capability of supporting this for on-premise deployments via a rack appliance.
As the Baseline Protocol itself is still in its bootstrapping phase, it was not possible to simply use a perfectly working "Baseline" endpoint and feed it perfectly designed and standardized data for the use case. A development environment had to be established in which all participants (e.g., distributed software teams) can continue working without blocking each other. One solution to this is a proxy.
A proxy is an intermediate layer that you establish in an integration process. The proxy consists only of simple domain model descriptions and basic operations like "Get List of Objects", "Get Specific Object" or "Create new Specific Object". So we created a domain model for the procurement use case we wanted to show, and designed basic DTOs ("Data Transfer Objects") for all the object types involved, like RequestForProposals, Proposals, PurchaseOrders and so on. We also automatically generated a service interface for all these DTOs, as well as an authentication service.
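For illustration, a DTO and the basic service surface described above might look like the following in TypeScript; the field and interface names are modeled on the text, not taken from the actual codebase:

```typescript
// Illustrative DTO and proxy service surface; all names are assumptions.
interface RequestForProposalDTO {
  id: string;
  buyerId: string;
  items: { sku: string; quantity: number }[];
  deliveryDate: string; // ISO-8601 date
}

interface ProxyService<T extends { id: string }> {
  list(): Promise<T[]>;                   // "Get List of Objects"
  get(id: string): Promise<T>;            // "Get Specific Object"
  create(dto: Omit<T, "id">): Promise<T>; // "Create new Specific Object"
}
```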
The proxy defines the entry point for all integration partners in the use case scenario, agreeing on a common domain model and service layer. Still, every participant runs its own proxy, keeping the decentralised structure in place.
The proxy does not perform any business logic on its own (apart from some basic example mappings to make the first setup easier). The proxy leverages the local Provide stack as a gateway to the Baseline Protocol by way of this open-source NuGet package.
To help baselining SAP environments (following the Buyer role in this demo), Unibright configured the Unibright Connector (the integration platform of the Unibright Framework) to integrate and map the SAP models with the proxy automatically.
Object Mapping in the Unibright Connector
SAP Main Navigation Hierarchy for Purchasing Process, incl ME49 -> Price Comparison
Request for Quotation for 2 materials
Quotation to the Request, incl PriceScale referenced to the OrderItem
Resulting Purchase Order for Supplier ("Vendor" 100073)
Using the action Dashboard of the webversion of the Unibright connector to monitor SAP <> Proxy communication
To help baselining Dynamics 365 environments, Envision Blockchain built an extension called Radish34 for Dynamics 365 Finance and Operations. While this demo is showing the Dynamics 365 Supplier environment, it's important to note that the Radish34 extension is dually configured to support both roles (Buyer and Supplier). Below is a diagram showing the specific Dynamics 365 Finance and Operation modules used and the objects that are passing through the Radish34 extension.
Radish34 Implementation Flow Chart
After importing the extension, organizations will need to configure parameters to interact with the Unibright Proxy, set up Customer codes and Vendor codes, and set up custom Number sequences (which create identifiers for Dynamics 365 objects).
Radish34 Parameter Module
Supplier Role
When setting up Customers, you'll need to identify customers using the Customer Setup Module and input the External code used in the Radish Module. The Value is automatically filled out by the proxy.
Customer Setup Module
You can use the Radish34 service operations feature to make periodic or on-demand calls of the UB Proxy and receive RFPs.
Radish34 Service Operations Module
You can use the Sales Quotation module to view and adjust the prices for the items the Buyer is requesting, and to send the quotation.
Sales Quotation Module
The Radish34 Outgoing Proposals module allows you to approve and send the proposal to the Buyer.
Radish34 Outgoing Proposals Module
The Radish34 Service Operations module will periodically check for incoming purchase orders from the Buyer.
Radish34 Service Operations Module (Unchecked for incoming purchase orders)
The Sales Order module lets you look at the approved proposal from the Buyer and confirm the sales order.
Sales Order module
⚠️ The "Baseline Microsoft Dynamics and Google Sheets" initiative is being integrated with this new reference implementation as a result of the `v0.1` release.
George Spasov (Limechain), Vlad Ivanov (Limechain), Kyle Thomas (Provide)
May 21, 2020
The "Baseline Microsoft Dynamics and Google Sheets" demo shows the establishment of a common frame of reference on the public Ethereum Mainnet between Microsoft Dynamics and Google Sheets. The demo extends the Radish34 POC, showing a procurement process in a supply chain POC.
The open-source-available code of the development work continues the positive trend of Baseline demos showcasing the connection between two systems with quite different levels of sophistication.
The Baseline Protocol is an approach to using the public Mainnet as a common frame of reference between systems, including traditional corporate systems of record, any kind of database or state machine, and even different blockchains or DLTs. It is particularly promising as a way to reduce capital expense and other overheads while increasing operational integrity when automating business processes across multiple companies.
The approach is designed to appeal to security and performance-minded technology officers.
You can find all the details on the Baseline Protocol here.
Continuing the work from the SAP and D365 demo, this demo aims to showcase that two systems at seemingly very different levels of sophistication can be kept in sync through the baseline approach. This matters because more than 30% of the vendors of big enterprises are small niche vendors lacking the resources, and the need, to integrate a sophisticated system.
The Use-case shown in the demo follows this path:
Julia is the supply manager in the "USMF - Contoso Entertainment System USA". She deals with finding and working with suppliers. Contoso uses Microsoft Dynamics 365 to manage all its operations.
Todd is the owner of the small niche HDMI manufacturing plant called ACME. Todd is quite happy to manage his plant via good old Google Spreadsheet.
Julia is in need of HDMI cables and has found Todd.
This demo will show how "Baseline" can help Julia's and Todd's records stay in sync despite Todd not using a sophisticated system.
Through the Dynamics Finance and Operations module Julia creates a Request for Quotation.
She specifies the HDMI cables that she needs and adds delivery details.
Then she specifies that ACME should receive this request and sends it.
In a minute Todd sees his spreadsheet populated with the latest request from Julia.
Todd reviews the request and decides to send a formal proposal to Julia. He enters the proposal data.
Through the Google Sheets add-on he connects to his baseline service and sends the proposal back to Julia.
In a minute Julia receives the offer from Todd.
As Julia is happy with Todd's proposal, she accepts it and proceeds to create an agreement out of it.
In a minute, Todd receives the agreement data in his spreadsheet through his baseline service.
With the agreement in place, Julia decides to buy some items from Todd.
Through the Purchase Orders module in Dynamics 365 she creates a new Purchase order and specifies the items and quantities she needs.
She confirms the purchase order and sends it to Todd.
In a minute, Todd receives the purchase order in his spreadsheet through his baseline service.
The proposed architecture and solutions to these challenges are presented in the next sections.
Julia's D365 - The Microsoft Dynamics ERP environment of Julia
Julia's Provide Shuttle - The Baseline service of Julia
Ethereum mainnet - The Ethereum mainnet and the Baseline smart contracts needed
Todd's Provide Shuttle - The Baseline service of Todd
Todd's Worker Google Cloud Functions - Cloud functions that regularly synchronize Todd's spreadsheet with Todd's Shuttle
Todd's Google Spreadsheet with Sheets Baseline Add-on - Google Spreadsheet with an installed Sheets Add-on written in Apps Script
The Microsoft Dynamics ERP component is extension code that connects to the Provide Shuttle environments. It is written in X++, the native language for Dynamics. It translates native Dynamics objects into the ones required by the Baseline service, and regularly polls the Baseline service for news coming from the network.
The Google Sheets side is a regular Google spreadsheet with a Baseline Add-on installed. Regularly repeating cloud tasks poll new info from the Baseline service and insert it into the correct tables. In addition, through the Add-on, Todd is able to connect and send data to his Baseline service.
Baseline Reference Implementation-2
`bri-2` is the second "baseline reference implementation". The purpose of this project is to show a baseline stack using different services compared to `bri-1`, but this stack must still comply with the baseline standards and specifications, therefore allowing interoperability with other baseline stacks. `bri-2` introduces the `commit-mgr` service to `baseline`. The `commit-mgr` acts as an extension to a web3 provider, which allows a variety of Ethereum clients to become "baseline compatible".
Note: `bri-2` is still a work in progress. Components such as a vault/key manager and a P2P messenger need to be added to make it a complete reference implementation.
The purple/orange blocks in the following diagram have been built. The green blocks are proposed services to be added and will interact with the existing services.
Here is a comparison of the reference implementations:
- docker
- docker-compose
- node v12.16
- npm
- ConsenSys Quorum account (for the `vault` + `key-manager` services)
After the docker containers have successfully initialized, make the following request to `workflow-mgr` in order to create a new workflow.
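The exact route and payload are defined by the `workflow-mgr` service; as a purely hypothetical sketch of what such a request can look like from a script:

```typescript
// Hypothetical sketch only: port, route, and payload shape are assumptions,
// not the documented workflow-mgr API. Requires Node 18+ (global fetch).
const res = await fetch("http://localhost:5001/workflows", {
  method: "POST",
  headers: { "content-type": "application/json" },
  body: JSON.stringify({ name: "workflow-01" }),
});
console.log(await res.json()); // the created workflow object
```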
This request should initiate the following sequence of events. The sequencing of steps is accomplished by using NATS as a job queuing service. If successful, steps 1-8 will be completed and the workflow object will have a ZkCircuitId, a Shield contract address, a Verifier address, and a status of `success-track-shield`.
**dashboard front-end**: In order to interact with the `bri-2` stack through a browser, please run the following commands.
Note: be sure to use `node v12.16`.
Navigate to `http://localhost:3000` in your web browser to view the `dashboard`.
If you have an existing bri-2 build, run the following sequence to remove old build artifacts:
You may need to run `make build` twice in order to properly compile the smart contracts.
Note: environment variables default to using `ganache` as the Ethereum network.
- Create new commitments (hashes of JSON objects) for the Workflows
- Push the commitments (hashes) into the on-chain merkle tree inside the Shield contract
- P2P messenger service for communicating commitment details to counterparties
- Integrated L2 to reduce mainnet gas fees
- Automated integration-level test suite
- Codefi Orchestrate Key-Manager service integrated for Eth/EDDSA key storage and signing capabilities
- Create new workflows
- Automatically generate, compile, and run setup for the zero-knowledge signature-checking circuit
- Automatically compile the newly created Verifier Solidity smart contract
- Automatically deploy the Shield and signature-checking Verifier smart contracts to ganache
Microservice container environment for a participant in a baselined business process.
The diagram above outlines the major architectural components. The following sections give a more in-depth overview of these components.
Note: Create a free ConsenSys Quorum trial account. Access the API documentation for the `key-manager` service.
| Service Type | bri-1 | bri-2 |
| --- | --- | --- |
| Eth. client | | |
| Key management | | Codefi Orchestrate |
| P2P Messenger | | |
The original demo that led to the Baseline Protocol
Radish34 is the result of a 2019 joint development effort to build a supply chain POC. The work led to the Baseline Protocol, a way for enterprises to use the Mainnet as middleware without compromising corporate information security practices. Radish34 remains the best way to show the general baseline approach in a specific example. You can build and run the proof of concept here. And you can see how a set of companies would integrate their supply chain management systems in this interactive demo.
Supply chain, as a topic, presented an obvious choice for exploring ways to use public blockchain in business. In particular, key steps in the manufacturing procurement process were a good focus for the Radish34 project.
First, the team from EY that helped start the Radish34 project are experts on the subject and were able to articulate a highly precise and detailed set of workflows with both functional and non-functional requirements.
Second, supply chain has been a classic starting point for enterprise blockchain explorations, because it involves such a tangle of different entities that must work together. But, one of the key problems with using private or public blockchains in a supply chain context is compartmentalization.
Even when different companies are partners in a single supply chain, there are things Party A shouldn't know about the relationship or activities of Parties B and C, even if those activities support the same Workflow for the same set of goods. The Radish34 team had to find a way to coordinate companies without anyone that was maintaining the integrity of the ledger learning anything about the business activities or relationships of the participants.
It turns out, ironically, that this problem is more insidious for private blockchains than for public networks, given the relative ease of analysis an attacker with access to a node can perform on a small group of known counter-parties. And so the choice was to show how the public Ethereum network could be used in a confidential procurement scenario without anyone being able to analyze the ledger and learn about the cadence, volume or particulars of any Party's business.
Even with modern supply chain management (SCM) and order management systems, revenue recognition for a Supplier can take time. Take the case of a Buyer and a Supplier that agree to a volume discount arrangement: a tiered pricing table in their master service agreement (MSA). To keep it simple, imagine that the Supplier will give a 10% discount on orders over 100 units purchased over the next year. The first purchase order (PO) comes in for, say, 100 units. This is easy to calculate. The initial state of the agreement is zero -- no units yet ordered. No discount. After the PO is processed, the new state is 100. The next PO comes in for 100, and if everything goes right, the Buyer will receive 10% off each of those 100 units, because they all fall within the 10% discount tier. So far so good. But what if two POs come in at the same time and both calculate from a base of zero? This happens quite a lot in the real world. And it is one of the reasons why generally accepted accounting principles (GAAP) have rules about when a Supplier can recognize revenue from purchase activity. Until someone can go through and make sure that each PO calculated in the right order from the right starting position, there's no way to be certain that the Buyer doesn't owe more (or less) than what the Supplier thinks.
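To make the race concrete, here is a toy calculation (a sketch with assumed prices, not Radish34 code) showing how two POs that both read a starting state of zero overstate what the Buyer owes:

```typescript
// A toy model of the tiered-discount race (numbers assumed, not Radish34 code).
const UNIT_PRICE = 10;      // assumed list price per unit
const TIER_THRESHOLD = 100; // units ordered before the 10% discount applies
const DISCOUNT = 0.1;

// Price a PO given the agreement state (units already ordered this period).
function poPrice(startingUnits: number, orderedUnits: number): number {
  const fullPriceUnits = Math.max(
    0,
    Math.min(orderedUnits, TIER_THRESHOLD - startingUnits)
  );
  const discountedUnits = orderedUnits - fullPriceUnits;
  return fullPriceUnits * UNIT_PRICE + discountedUnits * UNIT_PRICE * (1 - DISCOUNT);
}

// Correct sequential execution: the second PO starts from state 100.
console.log(poPrice(0, 100));   // 1000 -- no discount yet
console.log(poPrice(100, 100)); // 900  -- all 100 units discounted

// The race: both POs read a starting state of zero, so no discount is applied
// to either -- the Supplier books 2000 instead of the correct 1900.
console.log(poPrice(0, 100));   // 1000
console.log(poPrice(0, 100));   // 1000
```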
The Radish34 POC demonstrates how to ensure that each PO executes faithfully and in the correct order (without double-execution from the same starting state) by using the Ethereum public network as the common frame of reference needed to establish more quickly what the Buyer owes. It does this without putting any of the Supplier's or Buyer's data on any blockchain and without running on-chain the shared code that executes the volume discount agreement of the MSA, while still ensuring that each PO is calculated faithfully against it.
Instead, Radish34 implements the approach now called the Baseline Protocol. The public Mainnet is used not as a "single source of truth" (where the shared data is stored and the shared business logic is executed) but instead as a common frame of reference that enables the Supplier and Buyer to maintain their own systems of record in a verified state of consistency.
Even though this approach does not move the sensitive business data of the Buyer and Supplier's MSAs and POs to any kind of blockchain -- leaving it all right where any conservative security officer wants it, in their own internal high-security systems -- it does use the Mainnet's ability to store state and execute functions to provide this middleware service between the counterparties with great utility. And to be clear, while the approach uses well-known components such as peer-to-peer messaging, digital signatures, hashing, and zero-knowledge proofs, it’s how these elements work together that takes this from a simple "proof of existence" approach to something that can be used as a state marker, flow controller and "stealthy" private token for complex business processes in a widely distributed system environment.
The techniques used to work on the volume discount problem have other applications across the supply chain. Here are several that can reuse more or less the same components and patterns as Radish34's demo Workflow, like a box of Legos, to construct solutions across the supply chain from end to end.
Say a Buyer wishes to make the same request for proposal (RFP) to 10 different suppliers. It may be desirable, or even mandatory, for the suppliers to know about the presence of the others in the RFP process and know that the same RFP was made to all of them. But it would not be appropriate for Supplier A to have access to Supplier B's bid. In fact, Supplier A shouldn't even be aware that any of the other suppliers actually bid on the RFP, even though an ideal system would link each bid verifiably to the original multi-party RFP.
The terms that govern the movement of a shipment of product from, say, a Manufacturer to a Wholesaler are usually not only confidential; their very existence is confidential. Likewise, the terms between the Wholesaler and a Retailer of that product can't, in some cases, be known to the Manufacturer. However, there are cases when some or all of the product must be returned to the Manufacturer from the retail channel. Especially in highly regulated industries, the product's serialization process requires tight coordination. When the product is put back into circulation, the Retailer may need to coordinate with the Manufacturer. In this scenario, the Retailer has no visibility or connection past the Wholesaler. If this were only a "three party" problem, the Wholesaler in the middle could be used to join the two Parties on each side, but in many real-world scenarios, there are more Steps and more Counterparties involved.
In this case, what's needed is a way to ensure the overall integrity of a Workflow and run complex business rules that can connect Counterparties only when appropriate, without Parties necessarily having any awareness of the Steps that don't directly involve them. Each Party must be confident that what is coming into their Step is correct, and that what happens downstream from their activity will proceed correctly.
This is a trivial problem if a common vendor is trusted by everyone to run things. But this historically leads to balkanization, as many vendors, smelling profits, convince different sets of companies to use them as the common frame of reference. Another way to handle this is to set up a consortium and jointly invest in a shared ledger solution; however, this doesn't solve the problem of the double-blind requirement for any of the firms maintaining a node that validates activities on that ledger. Even if those activities are encrypted and masked, maintainers will inevitably see that some kind of activity is happening, and that itself may give them strategic information that other participants wouldn't knowingly permit. This is particularly acute within a ledger used by a relatively small number of known companies.
Say a Supplier happens to make a product that competes with a company that not only makes products but also runs a shipping operation. If the Supplier is obliged to use the competitor for shipping to the Buyer, and if they have committed to a late-delivery discount, then it's important for the Supplier to get the verified delivery date from the Shipper. If the Shipper knows who the Supplier is (so that it can send them the delivery date), then they can use that information to learn a lot about their product's competition: volumes, peaks and troughs, and their customer base. What if the Shipper were only aware of its own Step in the Workflow? What if, short of opening up a shipping box and inspecting the item, the Shipper didn't know who was shipping the package to that address? The trick then is getting the delivery date from the Shipper to the Supplier without the Shipper knowing where to send it. This is where having a common frame of reference, a message bus, is useful. The Shipper can 'dead-drop' the date under a topic (a key-value pair). The Supplier can watch the topic and grab the date. This is a good use of a public Mainnet, if we can use it without leaving traces that others could use to discover and analyze the Supplier, Buyer or Shipper's activities.
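A minimal sketch of the dead-drop pattern, assuming a simple key-value contract that emits an event when a value is posted. The `Drop` contract, its ABI, and the pre-shared topic are illustrative assumptions, not part of Radish34:

```typescript
// Illustrative dead-drop via an on-chain event log (not Radish34 source).
import { ethers } from "ethers";

// Hypothetical contract: emits Dropped(topic, value) when someone posts a value.
const dropAbi = [
  "event Dropped(bytes32 indexed topic, bytes value)",
  "function drop(bytes32 topic, bytes value)",
];

const provider = new ethers.providers.JsonRpcProvider("http://localhost:8545");
const dropContract = new ethers.Contract(
  "0x0000000000000000000000000000000000000000", // placeholder contract address
  dropAbi,
  provider
);

// The Supplier watches a pre-agreed topic without revealing who it is.
const topic = ethers.utils.keccak256(ethers.utils.toUtf8Bytes("shipment-42")); // pre-shared
dropContract.on(dropContract.filters.Dropped(topic), (_topic, value) => {
  const deliveryDate = ethers.utils.toUtf8String(value);
  console.log("verified delivery date:", deliveryDate);
});

// The Shipper dead-drops the delivery date under the topic, unaware of the reader:
// await dropContract
//   .connect(provider.getSigner())
//   .drop(topic, ethers.utils.toUtf8Bytes("2020-06-01"));
```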
With the release of Baseline Protocol v0.1, much of the original Radish34 proof of concept has been altered, abstracted, generalized or deprecated. We include the work here for legacy purposes, so that the history of the protocol can continue to be examined in the future. The use case described below also remains a good illustration of a scenario that benefits from baselining.
This document describes the procurement use case as a showcase for using the public Ethereum Mainnet to conduct ongoing procurement operations within privacy constraints and the established baseline requirements, yielding a scalable, repeatable, and extensible pattern for enterprises.
Based on discussions around scoping and general patterns observed in the procurement industry, we consider a two-party system: a Buyer, who intends to procure goods, and a Supplier (manufacturer), who can provide the finished goods in exchange for payment. The key interactions between these two parties across the process flow are laid out below.
RFP (Request for Proposal):
Buyer places request for proposal inviting suppliers to participate in the procurement process and lays out the procurement needs (quantity, price, etc.).
Supplier views the RFP.
Proposal:
Upon receipt of an RFP, the Supplier responds with a proposal, privy only to the Buyer, laying out the terms by which the Supplier can satisfy the Buyer's procurement needs. As an example, we assume these terms are a volume discount tiering structure used to determine the price of an order.
Buyer views the details of the proposal.
Contract (Master Service Agreement):
Buyer uses the terms of the proposal to award a contract, or MSA, to the Supplier, privy only to the Supplier.
Supplier views the agreement, signs the agreement, and provides the signed agreement back to the buyer.
Buyer validates and confirms the agreement.
Purchase Order:
Buyer issues a purchase order to the Supplier, privy only to the Supplier. The Buyer may choose to place an order for any quantity within the bounds of the MSA terms. The terms of the MSA are used to calculate the price of a given purchase order.
Supplier views the purchase order.
All two-party interactions are meant to be strictly private between the interacting parties.
Legacy enterprise data associated with the business process is never used directly to interact with the blockchain platform.
Complex interactions, such as negotiations between the two parties for any of the above processes, are out of scope for this use case.
For this use case, it is assumed that the RFP occurs prior to the MSA, even though in reality this order varies based on the parties involved and other terms and conditions of the agreement process.
In some industries (for example, government procurement), an RFP can be publicly distributed among multiple suppliers to avoid giving one supplier an unfair advantage over another.
Corresponding to the above breakdown of the processes, below is a list of technical design/implementation implications based on the process overview and assumptions. The design of the system allows for a gradual build-up of architectural components as we proceed from RFP to MSA to PO.
RFP: Private communication of RFPs is done via a secure off-chain communication channel.
Proposal: In reality, there may be terms associated with accepting proposals or determining whether they are valid. For this use case, we assume that there is no on-chain validation of an RFP.
MSA: Co-signing of the documents is a prerequisite for storing a hash of an MSA on chain. This signing process must also ensure that the identity of the Supplier is never revealed on chain. This is achieved with ZKP (zk-SNARK) tooling: proofs generated off-chain, attesting that the intended Supplier signed the MSA, are verified on-chain.
PO: Leveraging the terms of the MSA, a PO is created such that the inputs used to determine its price are never revealed, yet can be verified on-chain using ZKP technologies such as zk-SNARKs.
During the initial RFP stage, communication is purely off-chain. In the MSA stage, the Mainnet is leveraged for notarization and traceability. Finally, in the PO stage, verifiable off-chain computation triggers an on-chain process that issues POs as tokens.
The figure below describes a particular aspect of creating an MSA, represented as a sequence diagram.
These context-specific top-level business objects, such as RFP, MSA Contract, PO, and Invoice, are loosely coupled and only contain external references to other objects earlier in the process flow. This is because, technically speaking, it is possible to create any of these on their own, depending on the Organization's role and/or phase of interaction with other organizations in the Radish network; the application's process management logic, however, enforces the proper creation order. These objects are expected to cross system boundaries and also have on-chain representation. They are prominent in the User Interface, and the end user can interact with them (a sketch of their shapes follows below).
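A minimal sketch of how these top-level objects might be modeled with loose coupling via external references. The field names below are illustrative, not the actual Radish34 schema:

```typescript
// Illustrative shapes only; the actual Radish34 schema may differ.
interface RFP {
  id: string;
  sku: string;
  estimatedQty: number;
  supplierIds: string[];      // invited suppliers
}

interface Proposal {
  id: string;
  rfpId: string;              // external reference to the originating RFP
  rateTable: { upToQty: number; discountPct: number }[];
}

interface MSA {
  id: string;
  proposalId: string;         // external reference to the accepted Proposal
  buyerSignature?: string;    // populated as each party co-signs
  supplierSignature?: string;
}

interface PO {
  id: string;
  msaId: string;              // external reference to the governing MSA
  qty: number;
  price: number;              // derived from the MSA rate table
}
```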
These objects support generic business contexts and the usage of the Radish system within an organization. They are required to run the Radish system, but any on-chain identity is managed externally to the object (internally to the local system). These objects do NOT cross system boundaries and, other than the account/identity used for messaging or on-chain transactions, have no on-chain representation. They are likely reflected in the UI, and the end user can interact with them, though potentially under different labels (e.g., the User object is managed under "Account").
These objects are specific to the technology implementation. They encapsulate the delivery of objects, messages, data, identity, etc., and help ensure the reliability of the system as a whole and the durability of the data. These objects are not intended to be used or interacted with by end users (but could be, for diagnostic purposes).
These objects support the direct operation of the Radish system as installed for a specific environment/deployment in an organization's data center or cloud. Cryptographic keys are stored separately from configuration to support separate access controls and key rotation. These objects need not be managed in an RDBMS or storage system; cryptographic keys, however, should be stored in an HSM-based vault of some kind. A minimal sketch of this separation follows.
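The sketch below illustrates keeping key material out of the configuration file. The file path, environment variable name, and `rpcUrl` field are hypothetical, shown only to make the separation concrete:

```typescript
// Illustrative separation of config and key material (not the Radish34 layout).
import * as fs from "fs";

// Non-secret deployment configuration, safe to keep in an ordinary file.
const config = JSON.parse(fs.readFileSync("config/deployment.json", "utf8"));

// Key material is resolved at runtime from a vault/HSM or the environment,
// never from the config file, so access controls and rotation stay separate.
const signingKey = process.env.RADISH_SIGNING_KEY; // e.g., injected from a vault
if (!signingKey) throw new Error("signing key not provisioned");

console.log(`connecting to ${config.rpcUrl} with a separately provisioned key`);
```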
You can build and run your own functioning instance of the Radish34 proof of concept here.
But if you want to see what it might be like for a group of companies to use the Baseline Protocol in a wider set of supply chain operations, here is an interactive visualization:
In the supply chain story staged in this application, the user can play the role of a Buyer who wants to place an order with Suppliers for X number of widgets. They can also play the role of a Supplier who works with the Buyer to fulfill their purchasing needs.
The story begins with the Buyer creating a Request for Proposals and sending it to several suppliers, getting proposals back, selecting one proposal, and then Baselining the MSA contract between the Buyer and Supplier.
Before playing through the story, there are a few setup tasks to make the whole prototype demo experience work. You first need to run the setup process in the development environment to configure your demo environment.
We broke the supply chain procurement process down into two phases: Contracting and Ongoing. At this point in development, the Radish demo captures only key parts of the Contracting phase. Support for the Ongoing phase of procurement will come soon.
Buyer context: My R&D department has given me a request for a new part X for our product Y. I need to find two domestic suppliers for this part who can fulfill my expected 12 month volume.
Browse the list of suppliers in the Global registry (via the Radish App UI) for the supplier who carries part X.
Select two suppliers from the list who can deliver the approximate volume.
Click "Draft an RFQ" for this SKU/new part for the selected suppliers and enter in my estimated Qty needed.
Click "send" deliver it to the selected suppliers.
Wait for suppliers to respond with initial MSA agreements that contain their rate tables for part X at different quantities.
Review each and "sign & return" each of the MSA contracts from the suppliers.
Done with contracting phase.
Supplier context: I manufacture newly developed Part X. I know my volume capabilities and have rate tables I can provide to prospective buyers.
Add my company to the global Registry (during Radish setup).
Add new Part X to my system, publish Part X to my global registry entry.
Wait for an RFQ from a buyer.
New RFQ request comes in from Buyer for Part X.
I reply to RFQ with a pre-signed MSA contract that includes my rate table.
Wait for Buyer to sign.
I am notified when buyer signs.
Done with the contracting phase.
Buyer context: We have an MSA with two suppliers for part X. It's now time to order the part so we can have the inventory we need to begin manufacturing.
Select from a list of my parts, the new part X for product Y.
With the part selected, create a new PO and enter the quantity/delivery dates I need.
Since this part has two contracted suppliers (from the contracting phase), I can see my PO total price based on the MSA I have with each supplier; any existing POs I have sent are factored into my rate.
Send PO to selected suppliers.
Wait for suppliers to accept the PO and update the PO status to "in fulfillment."
Wait for invoice.
Receive invoice from supplier, open it, and click "pay."
This PO is now completed/closed. Other POs could still be open; I am able to view their status.
Supplier context: We have an MSA for Part X with a Buyer. At this point I am just waiting for POs. Also, I am such a good supplier that I can always meet customer quantity and time-frame demands, so I accept every PO I receive.
Receive notification of new PO from buyer.
Acknowledge the PO and change the PO state to "in fulfillment."
Go do the work of fulfilling the order for Part X.
Order filled, I find the PO and create an invoice against it. The details are pulled from the PO.
Satisfied with the invoice, I click "send" to deliver it to the Buyer.
I wait to be paid.
I am notified when the buyer pays.
PO/Invoice phase completed.
Here are a few visualizations of how the Radish34 POC functions.
The figure below depicts a representative workflow for the process of creating an MSA (Master Service Agreement). Refer to the Radish34 workflow explainer for detailed context on the process below. This process is chosen because it exercises the key interactions of the Radish34 API (built as a single tenant) with the backend microservices: ZKP and Messenger. In addition, it demonstrates all the system interactions (functional view) of a procurement process (MSA) that are designed and implemented in the overall stack.
Below is a summary of the MSA workflow broken down by steps; each step can be represented as a sequence of tasks, defined as any process interaction from the API container to the other containers. Please note that a few aspects of the diagram below are still in development (particularly around the orchestration of the API services using a lightweight queue management library, BullJS). In the current version, only the interaction with the Messenger service is treated as a task; efforts in the development branch aim to further modularize the key interactions so that they can be modified or substituted with other custom plugins/modules. Throughout the process, any successful interaction is stored/logged in a Mongo instance corresponding to each entity. There are two entities in this representation: Buyer (sender) and Supplier (recipient). Each entity is a separate instance of the following scale set of microservices: API, Messenger, and DB (Mongo and Redis) per entity, plus the common microservices: UI and ZKP.
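As a rough sketch of the BullJS-style task orchestration described above (queue and job names are illustrative, and `sendViaMessenger` is a hypothetical helper, not the Radish34 source):

```typescript
// Illustrative BullJS usage (not Radish34 source); requires a running Redis.
import Queue from "bull";

// One queue per outbound interaction type, backed by Redis.
const messengerQueue = new Queue("messenger-tasks", "redis://127.0.0.1:6379");

// Worker: each job sends a signed document to the counterparty via Messenger.
messengerQueue.process(async (job) => {
  const { recipient, payload } = job.data;
  // sendViaMessenger is a hypothetical helper wrapping the Messenger service API:
  // await sendViaMessenger(recipient, payload);
  console.log(`delivering document to ${recipient}`, payload);
});

// Producer: the API container enqueues the task instead of calling Messenger inline.
async function enqueueMsaDelivery(recipient: string, signedMsa: object) {
  await messengerQueue.add(
    { recipient, payload: signedMsa },
    { attempts: 3, backoff: 5000 } // retry on transient Messenger failures
  );
}
```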
The Buyer API executes methods to sign the MSA metadata under the Baby JubJub scheme. This requires storage of the keys created for signing the MSA metadata, which the demo manages locally in a development setup; in practice, key management is left to the best practices and maintenance policies of each production setup. The signed document is then sent from the Buyer's Radish34 instance to the Supplier's instance via the Messenger service.
Upon receiving the MSA metadata, the Supplier API extracts the signed metadata obtained via Messenger (which is then stored in the Supplier's DB instances), signs the metadata, and sends the "co-signed" document back to the Buyer via Messenger (where it is then stored in the Buyer's DB instances).
The Buyer API then interacts with the ZKP service to generate an off-chain proof of execution of the business logic: verifying the signature of the Supplier and running data validation checks on the terms of the MSA document (the volume tiering structure). This proof is then verified on chain by invoking an RPC request to the Shield and Verifier contracts deployed on chain. The Buyer API communicates the successful verification (transaction hash) and the merkle leaf index (the position in the Shield contract's merkle tree where the hash of the MSA document is stored on chain) to the Supplier via the Messenger service.
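The sketch below outlines this generate-then-verify flow, assuming a ZKP microservice exposed over HTTP and an ethers-based Shield interaction. The endpoint path, ABI fragment, `verifyAndPush` method name, and contract address are illustrative assumptions:

```typescript
// Illustrative proof-generation and on-chain verification flow (not Radish34 source).
import { ethers } from "ethers";

// Hypothetical Shield ABI fragment; verifyAndPush is assumed to call the Verifier
// internally and, on success, insert the document hash into the merkle tree.
const shieldAbi = [
  "function verifyAndPush(uint256[] proof, uint256[] publicInputs, bytes32 docHash) returns (uint256 leafIndex)",
];

async function proveAndVerify(signedMsa: object): Promise<void> {
  // 1. Ask the ZKP microservice for a proof that the Supplier's signature and the
  //    MSA's tiering terms are valid (endpoint path is an assumption).
  const res = await fetch("http://zkp-service:8080/generate-proof", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(signedMsa),
  });
  const { proof, publicInputs } = await res.json();

  // 2. Submit the proof on chain via the Shield contract.
  const provider = new ethers.providers.JsonRpcProvider("http://localhost:8545");
  const shield = new ethers.Contract(
    "0x0000000000000000000000000000000000000000", // placeholder Shield address
    shieldAbi,
    provider.getSigner()
  );
  const docHash = ethers.utils.keccak256(
    ethers.utils.toUtf8Bytes(JSON.stringify(signedMsa))
  );
  const tx = await shield.verifyAndPush(proof, publicInputs, docHash);
  const receipt = await tx.wait();

  // 3. Share the transaction hash (and leaf index) with the Supplier via Messenger.
  console.log("verified on chain:", receipt.transactionHash);
}
```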
Upon receiving the verification data from the Buyer, the Supplier API can run a confirmation check, either by capturing the events emitted on chain when the hash is stored in the merkle tree, or via an additional method that validates against data stored in the Supplier's DB instance.
The figure below depicts Baseline as a set of microservices enabled by the Baseline Protocol. Radish34 is an instance of Baseline built for the procurement use case. Baseline has been formulated based on core design and product principles that are directional for Radish34 and any other customization of the Baseline Protocol. Also shown is a sample instantiation of a hosted application of the Baseline Protocol (assumed to use Microsoft Azure services).
The figure shows the various components of the Radish34 system. In line with the design and extensibility principles of the Baseline Protocol, the system architecture below also identifies the components that can be replaced or modified for other, similar use cases. Across the different services/integrations listed below, light green represents the components that can be replaced or modified, and the darker ones represent the components that can be reused in further customizations for similar use cases.
API: This microservice orchestrates the overall application management and contains components that enable UI (GraphQL), blockchain, ZKP, messenger, and data integrations. In particular, API orchestration is handled using a queue-management-based approach.
Application Service: This represents the user-facing or user-interaction layer. Although the Radish34 demo shows a particular UI representation, this can be extended or integrated into external legacy data or application systems.
Smart Contract Management Service: Radish34 smart contracts are managed and built as part of the deployment process. This could be customized as needed as part of an overall pipeline or handled on demand.
Zero Knowledge Service: Radish34 circuits represent the off-chain proofs/statements to be verified on chain. The service contains utilities for compiling circuits, generating keys for proof generation, and generating proofs and Verifier contracts.
Messaging Service: Message communication is handled using Whisper; the service also contains utilities for creating identities and pub/sub wrappers for handling message communication.
Data Integration Service: This layer represents the different DB components used in the Radish34 implementation to manage data across the storage instances (MongoDB) and cache instances (Redis).
Additional integrations:
Custom wallets (in the form of config files) are leveraged by the API and blockchain interaction components to transact on chain. The config files are loaded as part of the build process and contain the key metadata required for the overall application user configuration settings.
Public Mainnet integrations are handled through the API, which in turn invokes RPC calls to the Ethereum Mainnet.