Closing the Automation Feedback Loop – Series 4/7
Blog post from 4/1/2026
In hybrid SAP landscapes, closing the automation feedback loop is not simply a matter of connecting systems. Most environments already exchange data between automation platforms such as Tosca or Worksoft and cloud-based systems like SAP Cloud ALM, Xray, or Azure DevOps. Results are transferred, status is synchronized, and evidence is attached. From a tooling perspective, everything appears connected.
But this is not the challenge. The challenge is that the feedback loop cannot emerge naturally from these systems. It must be constructed — under technical constraints that fundamentally shape what is possible.
Constraint 1: No Native Event Model
Enterprise test automation tools were not designed as event-driven systems. They do not emit execution events, nor do they provide native webhook capabilities. Even in well-configured network environments, they cannot simply notify other systems when execution has completed or changed state.
As a result, integration cannot rely on event-driven patterns. It must actively determine what has happened. In practice, this means polling. But polling itself is not the issue.
The real complexity lies in what the integration layer must ‘understand’:
whether execution is actually complete
which result is relevant in case of reruns or partial runs
how to detect meaningful state changes
how to avoid duplicates
how to preserve correlation to test scope and change context
This is not data transfer. It is interpretation.
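To make this concrete, the questions above can be sketched as an interpretation step over raw polling output. This is a minimal illustration, not a real tool API: `RawResult` and its fields are hypothetical stand-ins for whatever result structure an automation tool actually exposes.

```python
from dataclasses import dataclass

# Hypothetical shape of a record returned by polling an automation tool's
# result store. Field names are illustrative, not any real tool's schema.
@dataclass(frozen=True)
class RawResult:
    execution_id: str   # correlation back to test scope and change context
    test_case: str
    attempt: int        # reruns produce the same test_case with a higher attempt
    state: str          # e.g. "Running", "Passed", "Failed"

def interpret(results: list[RawResult]) -> dict[str, RawResult]:
    """Reduce raw tool output to one relevant, final result per test case."""
    relevant: dict[str, RawResult] = {}
    for r in results:
        # Is execution actually complete? Ignore in-flight states.
        if r.state == "Running":
            continue
        # Which result is relevant in case of reruns? Keep the latest attempt.
        current = relevant.get(r.test_case)
        if current is None or r.attempt > current.attempt:
            relevant[r.test_case] = r
    return relevant
```

The point is that even this toy version is logic, not transfer: completeness, rerun selection, and deduplication are decisions the integration layer has to make before any result is worth forwarding.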
And this logic cannot realistically live inside endpoint systems such as SAP Cloud ALM, Xray, or Azure DevOps. These systems are not designed to reconstruct execution events from external tools. Even where extensibility exists, implementing such logic leads to heavy custom code, brittle behavior, and duplicated effort across systems.
Constraint 2: Execution Visibility Cannot Be Transferred
Even if execution results can be detected, execution itself cannot be moved. Automation platforms such as Tosca or Worksoft are not lightweight services. They are stateful, highly interactive environments, typically implemented as agents with local GUIs and deeply embedded in SAP execution logic. Execution is not just a result being produced — it is a sequence of interactions, validations, and runtime decisions that unfold inside the tool. This execution context includes step-level logic, runtime navigation through SAP transactions, object interaction, validation behavior, and detailed debugging and trace capabilities. It is tightly coupled to the execution engine and its internal state.
As a result, it is neither exposed as structured, portable data nor accessible via a simple URL. It cannot be meaningfully transferred into cloud systems. SAP Cloud ALM, Xray, and Azure DevOps are designed to manage test artifacts, not to replicate execution environments. Attempting to reproduce execution visibility there would require rebuilding large parts of the automation tooling itself.
This is neither feasible nor necessary.
What follows is a structural limitation: governance systems operate on a reduced representation of execution — a projection rather than the full reality.
The Consequence: The Feedback Loop Must Be Engineered
Taken together, these constraints lead to a clear conclusion. The feedback loop cannot be event-driven, fully visible, or natively supported by the tools involved.
It must be engineered explicitly. Not by extending each endpoint system. And not by adding more point-to-point integrations. But by introducing a layer that can handle what the existing tools cannot.
Core Capabilities of an Integration Platform for Hybrid Testing
Hybrid SAP testing, however, is not a generic integration problem. Closing the automation feedback loop in hybrid SAP landscapes requires an integration platform with capabilities that go beyond traditional API or data integration.

It must be able to operate across fundamentally different environments — on-prem execution systems and cloud-based governance platforms — while preserving execution semantics, context, and consistency.
Such a platform must provide a set of core capabilities.
Execution-Aware Event Construction
Because automation tools do not emit native events, the platform must construct them. This requires detecting execution state changes, interpreting tool-specific result structures, and identifying when execution is truly complete. The platform must turn implicit behavior into explicit, usable events.
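One common way to construct events where none exist is to diff successive polled snapshots and emit an explicit event only when a test case reaches a terminal state. The sketch below assumes states are simple strings and that "Passed"/"Failed" are the terminal values; both are illustrative assumptions.

```python
def construct_events(previous: dict[str, str], current: dict[str, str]) -> list[dict]:
    """Turn implicit tool behavior into explicit events by comparing
    two polled snapshots of per-test-case execution states."""
    events = []
    for case, state in current.items():
        # Emit an event only on a transition into a terminal state,
        # so repeated polls of an unchanged result produce nothing.
        if state in ("Passed", "Failed") and previous.get(case) != state:
            events.append({
                "test_case": case,
                "event": "execution_completed",
                "outcome": state,
            })
    return events
```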
Intelligent Polling and State Interpretation
Polling is unavoidable — but it must be meaningful. The platform must detect deltas between polling cycles, correctly interpret execution states, handle reruns and partial executions, and maintain a consistent view of execution progress.
Polling becomes a mechanism for reliable state management, not just data retrieval.
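Part of that state management is knowing whether a run is complete before anything is propagated. A minimal sketch, assuming string states where "Passed"/"Failed" are terminal, might summarize a possibly partial run like this:

```python
def run_progress(states: dict[str, str]) -> dict:
    """Summarize a (possibly partial) run so the polling layer can decide
    whether results are ready to propagate or the run is still in flight."""
    finished = {c: s for c, s in states.items() if s in ("Passed", "Failed")}
    return {
        "total": len(states),
        "finished": len(finished),
        "complete": len(states) > 0 and len(finished) == len(states),
    }
```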
Context Preservation Across Systems
Execution results only become meaningful when their context is preserved. The platform must maintain the relationship between test scope, execution, and — where applicable — the underlying change. This ensures that results remain traceable and usable beyond simple reporting.
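In practice this amounts to carrying a correlation record alongside every result. The identifiers below (a test scope key, a tool-side run ID, an optional change reference) are hypothetical examples of what such a record might contain:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class ExecutionContext:
    test_scope_id: str        # e.g. a requirement or test plan key
    execution_id: str         # the tool-side run identifier
    change_id: Optional[str]  # e.g. a transport or work item, if available

def correlation_key(ctx: ExecutionContext) -> str:
    """Build a stable key so a result stays traceable to its scope and
    change even after crossing system boundaries."""
    return f"{ctx.test_scope_id}/{ctx.execution_id}/{ctx.change_id or '-'}"
```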
Structured Representation of Execution Outcomes
Execution cannot be transferred in its entirety, but it can be translated. The platform must extract and structure the relevant parts of execution — outcomes, key evidence, and execution metadata — and present them in a way that can be consumed by SAP Cloud ALM, Xray, or Azure DevOps without requiring those systems to replicate execution logic.
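Translation here means projecting a tool-specific result into a small, normalized shape. The input keys and verdict values in this sketch are invented for illustration; any real mapping depends on the tool's actual result structure.

```python
def to_outcome(raw: dict) -> dict:
    """Project a tool-specific raw result (hypothetical keys) into a
    normalized outcome a governance system can consume directly."""
    return {
        "test_case": raw["name"],
        "verdict": "PASS" if raw["status"] in ("Passed", "OK") else "FAIL",
        "evidence": raw.get("evidence_ref"),  # key evidence only, not full traces
        "executed_at": raw["end_time"],       # execution metadata
    }
```

The target systems never see step-level execution internals, only this reduced projection plus references to evidence.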
Clean Integration into Governance Systems
The platform must integrate with cloud systems without overloading them.
This means avoiding custom logic in endpoint systems, ensuring consistent updates, and delivering normalized, interpretable execution outcomes rather than raw data.
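"Consistent updates" implies idempotent delivery: the same outcome must not be pushed twice into a target system. A minimal sketch of such a sink, with the transport (`send`) left abstract, might look like this:

```python
class GovernanceSink:
    """Deliver each normalized outcome at most once per correlation key,
    so the target system receives consistent updates rather than raw,
    duplicated data. The send callable stands in for a real API call."""

    def __init__(self, send):
        self.send = send
        self._delivered = set()

    def deliver(self, key: str, outcome: dict) -> bool:
        fingerprint = (key, outcome["verdict"])
        if fingerprint in self._delivered:
            return False  # duplicate update suppressed
        self._delivered.add(fingerprint)
        self.send(key, outcome)
        return True
```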
Respect for Tool Boundaries
A viable solution must respect the nature of the landscape.
Execution remains in the automation tools. Governance remains in cloud systems. Execution context stays where it belongs.
The platform acts as a mediator — not a replacement.
Types of Integration Platforms
Not all integration platforms are the same. In practice, two categories have emerged: general-purpose platforms and purpose-built platforms designed for specific domains such as hybrid ALM.
General-Purpose Integration Platforms
General-purpose integration platforms (e.g. SAP CPI / SAP Integration Suite) are designed to connect systems, expose APIs, and move structured data across system boundaries (e.g. sales orders). They are highly effective when integration follows predictable patterns — for example when business documents are exchanged or services are orchestrated via well-defined interfaces.
In hybrid SAP testing environments, however, the nature of the data and interactions is fundamentally different. Execution does not produce clean events or stable payloads. Instead, it produces evolving states that must be interpreted before they become meaningful.
General-purpose platforms can technically be extended to handle this. But in practice, this requires implementing significant amounts of custom logic on top of the platform:
custom polling and delta detection
execution state interpretation
tool-specific adapters
correlation logic across systems
In other words, the platform provides the connectivity — but not the intelligence required to understand execution.
As a result, the core logic of the feedback loop must be built from scratch, typically as customer-specific implementation. This leads to increased complexity, duplicated logic across integrations, and solutions that are difficult to maintain over time.
Purpose-Built Platforms for Hybrid ALM Orchestration
This is where a different class of platforms becomes relevant.
Purpose-built orchestration platforms such as Conigma Connect are designed specifically for hybrid ALM environments. They do not treat execution data as generic payload, but as domain-specific information that requires interpretation and structure.
What distinguishes these platforms is not connectivity — but awareness.
They are built with an understanding of how execution behaves, how results evolve over time, and how context must be preserved across system boundaries. Instead of requiring customers to implement the necessary logic themselves, they provide it as part of the platform.
Put simply: they do not merely transport data; they understand the payload and use it to orchestrate SAP ALM.
In practice, this is reflected in a focused set of capabilities:
execution-aware polling and state detection, without requiring custom implementation
built-in logic for interpreting execution states and handling reruns or partial results
preservation of context between test scope, execution, and (where available) change
structured propagation of results into SAP Cloud ALM, Xray, or Azure DevOps
clear separation between execution systems and governance systems
Rather than pushing raw data into target systems, these platforms deliver normalized, interpretable execution outcomes — in a form that can be directly used for test management and governance, without requiring additional custom logic in those systems.
What This Looks Like in Practice
With a purpose-built orchestration platform such as Conigma Connect, the architecture becomes significantly cleaner. Test automation remains in its native environment, while cloud-based systems continue to provide visibility and governance. Security constraints are respected, and no custom logic needs to be embedded in SAP Cloud ALM, Xray, or Azure DevOps.
At the same time, execution outcomes are no longer loosely synchronized across systems. They are actively detected, interpreted, correlated, and propagated in a consistent way. What previously required manual coordination or custom logic becomes part of a controlled and repeatable mechanism.
The feedback loop is no longer implicit. It becomes a clearly defined capability within the architecture — one that ensures execution results are not just visible, but consistently usable across system boundaries.
Bringing It Together
Closing the automation feedback loop in hybrid SAP landscapes requires a shift in approach. It starts with accepting the technical constraints that define how execution systems behave and what they can expose. It continues with externalizing the logic that cannot realistically live inside the endpoint systems. And it ultimately depends on using a platform that understands the domain — not just the data.
Not all integration platforms are designed for this. Only those built specifically for hybrid ALM orchestration can provide the necessary capabilities. They do not simply connect systems; they ensure that execution outcomes are interpreted, structured, and made usable across them.
What Comes Next
At this point, execution results are no longer just visible — they are structurally available across the landscape. They can be consistently interpreted, correlated, and consumed where decisions are made.