Frequently Asked Questions

Common questions about mimicry, observations, business logic discovery, executable specifications, and legacy system transformation.

What is Mimicry?

Why is your approach called mimicry?
We observe an application and learn what it does, with the goal of implementing a new version that does the same. This is like mimicry in nature, where one animal mimics another. While the external behavior is identical (you cannot tell the new from the old), the internals are completely different.
Can mimicry be applied to any kind of system?
No. mimicry is specifically tailored for information processing systems common in enterprises and machine controllers used in industry. This includes typical web applications and data transformation hubs. It is not suited for scientific and complex numerical systems such as weather forecasting, physics simulation, or risk modelling.
Is mimicry a fully automated process?
No. mimicry uses a number of different tools for its analyses, which we configure and combine into a repeatable end-to-end process after a first analysis of observations from your systems. The human stays in the loop throughout this process.
Does mimicry ignore "outliers" as many AI systems do, i.e. edge or rare cases for a system?
No, mimicry is designed to consider every observation, and if it differs from the expected behavior drawn from other observations, it is a case that mimicry specifically analyses further, rather than ignoring it.
With AI sometimes hallucinating, will mimicry also hallucinate when creating the new application?
The techniques mimicry uses are different from large language models (LLMs), with which hallucinations are associated, and mimicry does not hallucinate. However, the methods it uses are based on statistics, generalizing behavior from large sets of observations. A generalization may differ from the actual logic the original system uses. mimicry addresses this risk by generating additional test cases if there is doubt, and by flagging unconventional rules. Customers should plan to have critical parts of the new system reviewed by SMEs.
What about regulatory and compliance requirements?
The ability to demonstrate that new systems have equivalent behavior to validated legacy systems can be valuable for compliance. The link between the documented mental model and code leads to traceability from business requirement to code. Many regulated organizations find this level of validation and traceability helpful for audit and approval processes.

How It Works

What is a mental model?
The mental model is mimicry’s representation of the business logic we have discovered from observations. It includes elements such as interfaces, data types and data models, workflows, data flows, business rules, calculations, and state transitions. The mental model is designed to be understandable by both technical and business people. It is interactive and explorable - zoom from high-level overview to specific details, filter by scenario. The mental model serves as the source of truth for generating executable specifications and new implementations.
What is an executable specification?
The executable specification is the translation of the mental model into executable code, where there are direct mappings between details in the mental model and corresponding code sections. It is used to validate that the mental model provides equivalent behavior to the original system by running the same cases on both the original application and the executable specification. It can be used as a replacement for the original application, or as a blueprint to build a replacement in another architecture or technology.
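To illustrate the direct mapping between mental model and code, here is a minimal sketch. The rule identifier `BR-017`, the `RULES` table, the `apply_fee` function, and the threshold value are all invented for this example; they are not mimicry's actual representation, which is more elaborate.

```python
# Illustrative sketch of traceability between a mental-model element and the
# generated code. Each discovered business rule carries an identifier that
# the code refers back to, so reviewers can move between model and code.
# All names and values here are hypothetical.

RULES = {
    "BR-017": "Fees are waived for transfers at or above the premium threshold.",
}

def apply_fee(amount: float, threshold: float = 1000.0) -> float:
    """Implements BR-017 from the mental model."""
    if amount >= threshold:   # BR-017: waive the fee above the threshold
        return 0.0
    return amount * 0.01      # otherwise a standard 1% fee applies

assert apply_fee(1500.0) == 0.0   # fee waived, per BR-017
assert apply_fee(500.0) == 5.0    # 1% fee charged
```

Because every rule in the mental model points at a code section and vice versa, a reviewer questioning a rule can inspect exactly the code that implements it.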
How do you discover business logic?
We collect observations from the running system - all inputs, all outputs, all interactions with the world outside of the application. Various analysis algorithms then identify patterns across these observations - interfaces, workflows, data flows, business rules, calculations, conditional logic, data transformations, etc. The algorithms discover relationships between data and behavior, building up a complete picture of what the system does. This discovery is systematic and covers all observed cases.
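To give a flavor of pattern discovery, here is a deliberately simplified sketch: from observed input/output pairs, it infers a candidate threshold rule. The observation format, the attribute names, and the `infer_threshold` function are assumptions for illustration only; mimicry's actual analysis algorithms are far more sophisticated.

```python
# Hypothetical sketch: inferring a simple conditional rule from observations.
# Each observation pairs the inputs the system received with the output it
# produced. Here the pattern to discover is "the fee is waived above some
# amount" - a threshold separating the two observed behaviors.

observations = [
    ({"amount": 500,  "country": "CH"}, {"fee": 5}),
    ({"amount": 1500, "country": "CH"}, {"fee": 0}),
    ({"amount": 800,  "country": "DE"}, {"fee": 8}),
    ({"amount": 2000, "country": "DE"}, {"fee": 0}),
]

def infer_threshold(obs, attr, out_key):
    """Return a candidate threshold if the output cleanly splits on attr."""
    waived  = [o[0][attr] for o in obs if o[1][out_key] == 0]
    charged = [o[0][attr] for o in obs if o[1][out_key] != 0]
    if waived and charged and min(waived) > max(charged):
        return min(waived)  # smallest value for which the fee was waived
    return None

threshold = infer_threshold(observations, "amount", "fee")
# Candidate rule: the fee is waived for amounts >= 1500
```

A real discovery pipeline tests many such hypotheses across attributes and flags observations that contradict a candidate rule for deeper analysis, rather than discarding them as outliers.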
Are your workflows expressed as BPMN 2.0?
No. BPMN 2.0 is designed for business processes with human tasks and handoffs. Our workflows capture the detailed technical steps a system takes to process data - API sequences, state transitions, data transformations, and computational logic. BPMN doesn’t have the constructs we need for these technical details.
What technologies can you transform to and from?
The observation approach is technology-agnostic, apart from the underlying operating system or framework (i.e. the boundary between the application and the outside world, at which observations are collected). We have successfully analyzed systems written in PL/I, C++, C, Python, Java, and JavaScript, as well as less common languages like Vala or APL. The executable specification we create uses Go for the backend and TypeScript for the frontend. The mental model can be mapped onto many modern platforms, and we are working with partners to establish corresponding generators.
What about integration with other systems?
Observations capture all external interactions - API calls, database queries, message exchanges, file operations. We document data formats, protocols, and timing dependencies. The new system can maintain compatibility with all surrounding systems. In the inapay project, the replacement was plug-compatible with all integrated systems.
How do you ensure the new system behaves exactly like the old one?
We use parallel testing - running both systems with identical inputs and comparing outputs. For the inapay project, we compared 40 million parallel observations to verify bit-level compatibility. This level of validation provides high confidence in behavioral equivalence.
How do you handle systems with complex business rules?
The analysis algorithms identify patterns across observations - calculations, conditional logic, state transitions. We discover rules that may span multiple parts of the system. The systematic analysis often reveals rules that weren’t formally documented but are critical to correct operation. One securities operations system we successfully analyzed featured more than 4'000 complex business rules using about 150 attributes in different data entities.
How large is observation data, and will its collection slow down the system a lot?
Observation data is large, but not massive. Typical data sizes for average applications are on the order of 100 GB. The application slowdown due to observation is typically 20-30%. Details are heavily dependent on the system being analyzed. So far, it has never emerged as a stumbling block for mimicry use.

What You Need

Do you need access to our source code?
No. mimicry works by collecting observations from the running system and analyzing them to build a mental model of its behavior. No code is needed for this. In rare cases, we have found source code helpful to surface technical constructs mimicry had not yet learned about, or to analyze the code coverage from all observations. Both uses of source code can also be covered by other means, making access to it unnecessary.
Do you need access to our systems?
Our observers need access to your test systems, and in some cases to production systems, e.g. if you do not have test cases or if you would like to execute a parallel run between the old and the new application. The observers run with your system under your control, with the captured observations being transferred to us for analysis.
Do you need access to our SMEs?
Yes. We need your technical experts to set up our observers with your application, and your SMEs to review discovered business logic, to advise on further analyses needed, and to answer questions. Compared to traditional approaches, much less of their time is needed.
How "plug-and-play" are observers?
Our observers for web applications and Linux can observe any application “as is”, and we are enhancing our Windows observers to reach similar capability. Having said that, we have successfully applied mimicry to mainframe applications and machine controllers as well, and have always found ways to observe a system.

What You Get

Can we see the business logic discovery results before committing to full transformation?
Yes! You will see initial observation and analysis results within weeks of starting. Many organizations find this analysis valuable on its own for documentation, modernization planning, or compliance purposes. You can decide whether to proceed with further analyses or transformation after seeing what we’ve learned about your system.
What do we get from the initial analysis?
You’ll see what we’ve learned about your system - interfaces, workflow patterns, business rules we’ve discovered, data dependencies, state transitions. This is real information derived from observations. It helps you understand the scope and complexity of potential transformation, and whether mimicry is the right approach for your situation.
How good is the code for the new application?
Reviews with architects from customers and partners have found the executable specification code to be maintainable and well structured. They especially praised the consistency in style between different parts. As a plus, the discovered mental model is reflected in the code, making it easy to switch between them.
How good is the performance of the new application?
Performance depends on the target technology and implementation choices, as well as the performance of the original application. The executable specification often performs better than the original because it’s implemented in modern technology with clean architecture, without accumulated technical debt. In the inapay case, the new Go implementation required similar amounts of CPU and significantly less memory than the original Java application.
How secure is the new application?
mimicry applies current best practices for security engineering in the components it uses and the code it generates. One advantage it has is that it applies such best practices consistently across all code, reducing the chance of accidental vulnerabilities.

Common Concerns

Won't mimicry just replicate all the problems from our legacy system?
No. mimicry extracts the mental model (i.e. the business logic) from the legacy system, and nothing else. It uses the extracted model to build a new application using modern architecture and technology, and validates that it is functionally equivalent. As a principle, it does not replicate any kind of technical debt, architectural mismatches, obsolete base technologies, or obsolete functionality. It replicates the business functionality in an architecture that is ready to be extended or changed based on your needs. If you do not want to retain the business functionality, you would not use mimicry but rather replace the old system with one that has the different functionality you are looking for.
How do you handle undocumented features and edge cases?
mimicry creates a mental model that covers all cases it has seen during the observation period, independently of whether they are documented or not. mimicry can create additional test cases which specifically check edge cases, if they are not included in the test cases you have.
What if our system does things we don't want it to do?
With business logic discovery, we show you what your system actually does. If there are behaviors you want to change or eliminate, this can easily be accounted for in the observations or the mental model, so that it will not be replicated into the new application.
Can mimicry be sure to cover all cases the system supports?
In principle, it cannot. mimicry will analyze all observations provided, reflecting every case that is included in your test cases or that has happened in production over the observation period. We will discuss the observation period at project start to ensure the coverage is what you need. Furthermore, we will generate additional test cases to increase coverage. In some situations, we are also able to collect information about which pieces of code have been used during the observation period, and can add further cases to activate untouched code sections. While we cannot guarantee completeness, you can be sure that at least everything used during the observation period will work on your new system as it did on the old. Proponents of code-analysis-based approaches argue for superiority because the code covers every case. The argument is theoretical, as the difficulty of extracting behavior from source code makes it practically just as impossible to guarantee completeness. As additional advantages of mimicry, you can filter out capabilities that are no longer needed from the observations, automatically keeping them out of the mental model and the new application, and you increase test coverage along the way.
Can you handle systems with missing or incomplete documentation?
Yes. We don’t rely on documentation; we observe what the system actually does. In fact, organizations find that our business logic discovery reveals more accurate information than their existing documentation, which may be outdated or incomplete.
What happens if our system changes during the mimicry process?
Observations capture the system at a point in time. If significant changes occur, we can collect new observations. The incremental nature of the approach makes this manageable - we can validate new implementations against current behavior continuously, rather than working toward a single cutover date months in the future.
How long does this process take?
Initial observation and analysis typically takes weeks to a few months depending on system complexity and scope. Full transformation timeline depends on system size and what you’re targeting. Because we work from discovered specifications rather than interpreting requirements, timelines tend to be more predictable than traditional approaches. Each situation is different - we can discuss realistic timelines for your specific case.

Business Considerations

How much does mimicry cost?
The cost is driven by system complexity, and by whether you need some analyses only or an end-to-end systems replacement. One of the advantages of mimicry is that you can start small with a focused analysis, helping you understand your system, and then decide which further steps and investments you want to take. Analyses of small systems start as low as CHF 10'000.
What's the risk compared to traditional modernization?
mimicry dramatically reduces risk in several ways. First, you get a complete understanding before committing to full transformation. Second, our validation - replaying original observations, running additional test cases, or conducting a parallel run - gives you confidence that, for everything exercised so far, the new system behaves just as your old one did. Third, you have complete documentation of everything the new system does in the mental model, ready to be reviewed by your experts.
Can we do this in phases?
Yes. You can start with observation and analysis to understand what you’re dealing with, then decide how to proceed. Transformation can happen module by module, with each piece validated independently while maintaining compatibility with unchanged parts.
What if we're not sure mimicry is right for us?
Start with an initial scoped engagement to collect observations and do preliminary analysis. This gives you concrete information about your system without major commitment. After seeing what we learn, you’ll have a much better understanding of whether mimicry is the right approach, how complex transformation would be, and what alternatives might make sense.

Getting Started

What information do we need to provide initially?
High-level understanding of your system - what it does, what technologies it uses, what problems you’re experiencing. For actual observation collection, we’ll need to discuss access to systems (test environments, available test cases, functionality only used at specific points of time such as year-end) and collaboration with people who understand the business context. We’ll define specific requirements together.
How do we get started?
Start with a conversation about your system and what you’re trying to accomplish. We’ll discuss whether mimicry seems applicable and what a first step might look like. If it makes sense to proceed, we’ll propose a scoped engagement to collect observations and do initial analysis.
What happens after the initial analysis?
You’ll see what we’ve learned about your system and have real information to make decisions. If you want to proceed with transformation, we’ll work together on implementation with continuous validation. If mimicry isn’t the right approach, you’ll at least have better understanding of your system than you had before.