May 27, 2020 Dushan Bilbija

Early Deal Flow Technology Questions

Your diligence starts the moment research does; you’re collecting information about the target company’s technology from the outset. In the early stages of the deal flow, you’re looking for potential gaps in platform and organizational abilities that could threaten your strategy for the company. It’s also a good time to start evaluating core requirements for maturity (e.g. scalability, quality, security, resiliency).

The questions below are meant to reveal potential areas to investigate further (either in a subsequent discussion or during diligence). They cast a wide net, cover some basics, and generally open discussions that let you start building a technology risk profile earlier in the deal flow.

Remember that while there are confidence-building observations as well as answers that indicate an area of potential concern, it is important to “see the whole board” before drawing conclusions; these are not a substitute for a full diligence.

  1. “What is the technology stack?”
  2. “How is your software delivered?”
  3. “Are all customers on the same version of your software?”
  4. “How do you prioritize your roadmap?”
  5. “How do you split your time between new functionality, enhancements, bug fixes, and platform innovation?”
  6. “How long does it take for a customer to onboard?”
  7. “What is your approach to testing? For example, are QA Engineers integrated with developers, do you have automated testing, do you track code coverage?”
  8. “Do you have regular security penetration testing, vulnerability scans? If so, what were the latest results? How quickly were issues remediated?”
  9. “Are there any regulatory requirements for compliance (e.g. PCI, HIPAA, GDPR)? If so, are you compliant? When was the last certification?”
  10. “Have you had any security breaches, outages or degradations to the customer experience in the last two years?”

“What is the technology stack?”

We start with the basic building blocks of the company’s software and services. Answers here can point to potential technical debt, deferred (but important and needed) platform enhancements, and recruiting challenges (especially with older and/or fading technologies).

Confidence-building:

Mainstream languages and frameworks (both back-end and front-end). If any language/framework is unfamiliar, ask why that particular choice was made.

  • Languages: C#, Java, JavaScript, Python, C/C++
  • Frameworks: .NET, Spring, Django, Angular (not version 1.x), React, Vue

Potential concern:

  • Angular 1.x (out of development since 2018; significant effort to upgrade to newer versions);
  • Delphi (dwindling community & stagnant codebase);
  • Visual Basic 6 (no updates since 2008);
  • ColdFusion (small developer community);
  • PHP (risk of running version 5.x, which is out of support).

“See the whole board”

This is a term we use as a reminder to set aside biases and engage with the entire conversation.

For example, PHP is still a widely used language, has a vibrant developer community, and has sound frameworks that underpin modern development efforts. However, version 5.x has been out of support since the end of 2018, and most companies we reviewed last year were not yet migrating to version 7 even as late as September/October. “Are you on version 7?” is now the first question we ask when we see PHP, and will likely stay that way this year. The target here is the unsupported version, not PHP.
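
Because this check comes up so often, it can be scripted. Below is a minimal sketch in Python (purely illustrative, not part of our process) that shells out to the standard `php -v` command and flags an end-of-life 5.x runtime; the parsing assumes PHP’s usual `PHP X.Y.Z (cli) ...` banner format.

```python
import re
import subprocess

def php_major_version():
    """Return the major version of the local PHP CLI, or None if unavailable."""
    try:
        # `php -v` prints a banner like: "PHP 7.4.33 (cli) (built: ...)"
        banner = subprocess.run(
            ["php", "-v"], capture_output=True, text=True, check=True
        ).stdout
    except (FileNotFoundError, subprocess.CalledProcessError):
        return None
    match = re.match(r"PHP (\d+)\.", banner)
    return int(match.group(1)) if match else None

major = php_major_version()
if major is None:
    print("PHP not found on this host.")
elif major < 7:
    print(f"PHP {major}.x detected: out of support, flag for follow-up.")
else:
    print(f"PHP {major}.x detected.")
```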

We will cover more of what “see the whole board” means in a later blog.

“How is your software delivered?”

Deployment configurations have a wide-reaching impact on technology, operating costs, and growth potential. Remember that certain deployment configurations are sometimes necessary (e.g. security concerns, supporting customer legacy systems) or part of the overall product strategy (e.g. monitoring on-premise networks and services).

Confidence-building:

  • Software-as-a-Service, in a public cloud (e.g. AWS, Azure, Google Cloud).

Potential concern:

  • On-premise with the customer (adds operational complexity);
  • In a company-owned/managed data center (significant cost to operate and maintain).

“Are all customers on the same version of your software?”

Here we are looking for indicators that the company has versioning under control. As with creating custom code for specific clients, supporting multiple versions creates complexity in upgrades and hot-fixes, and can divert teams’ attention from current development efforts. Remember that we need to see the whole board; for example, some products may have versioning issues but are very low-revenue, which minimizes the impact.

Confidence-building:

  • Yes (likely answer with SaaS delivery).

Potential concern:

  • No (or anything except “Yes”).

Follow-up to a “No” answer:

  • “Do you have visibility/metrics on which versions customers are using?” (this helps a company understand their customers’ risk profile; outdated versions can have security vulnerabilities and user-experience degradations; see the sketch after this list)
  • “How many versions are supported?” (ideally 1-2; otherwise this could be a support/maintenance burden)
  • “What is the oldest version supported and/or in use?” (this helps quantify the above answer; aged components and code should not be supported)
  • “Are there any customer-specific versions in use?”
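
To make the visibility/metrics follow-up concrete, here is a minimal sketch (with invented customer data) of the version-distribution report we would hope the company can already produce on demand:

```python
from collections import Counter

# Hypothetical data: the software version each customer currently runs.
customer_versions = {
    "acme": "4.2", "globex": "4.2", "initech": "4.1",
    "umbrella": "3.0", "stark": "4.2",
}

total = len(customer_versions)
for version, count in Counter(customer_versions.values()).most_common():
    print(f"v{version}: {count} customer(s) ({count / total:.0%})")

# A long tail of old versions in this report is exactly the
# support/maintenance burden the questions above are probing for.
```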

“How do you prioritize your roadmap?”

While we’re looking for companies that are market/innovation-driven, the fact is that early-stage companies, for example, are usually sales-driven. Maturation starts with learning when and how to say “no”, and with applying requests strategically to large groups of customers rather than just one. What we’re looking for here is where they are today and their capacity to get to the next level.

Confidence-building:

  • Use of KPIs/metrics;
  • Defined allocations for platform innovation;
  • Following an overall company vision/strategy;
  • Product Management mentioned as a specific discipline, with appropriate collaborative tools used;
  • Internal departments are involved;
  • Customers are involved (either via a Customer Advisory Board, User Groups, or similar).

Potential concern:

  • Mostly discussion- (and/or CEO-) driven (not collecting or factoring in data; risk of building lower-value functionality);
  • Sales-driven (chasing revenue can lead the company off their roadmap and strategy);
  • Not all departments represented during planning (without input from departments like Customer Support or Technology, critical operational and maintenance work gets relegated to secondary priority).

“How do you split your time between new functionality, enhancements, bug fixes, and platform innovation?”

Building on the previous question, targeting specific areas of development helps ensure attention is paid to the needs of the customers, the platform, and the company. As the company discusses allocation to various projects, the discipline of making trade-offs helps prevent scope creep, ensures proper functionality is being built and de-risks the roadmap.

Confidence-building:

Bug fixes should ideally be less than 5%, though 5-10% is still a reasonable target; other numbers can vary, but we’re mainly looking for a firm answer and the rationale behind the selection(s).

Potential concern:

  • If there’s no real target, or leaders don’t have a gut feel for the actuals, this is a sign of potential prioritization and traceability issues.
  • More than 15% spent on bug fixes indicates potential quality issues that need to be investigated further.

Follow-up question:

  • “What is the R&D spend as a percentage of revenue? How consistent has this been?”
    • ~10-20% is typical, though it can vary wildly for smaller companies. If higher or lower, ask what the main drivers have been.
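
As a quick illustration of the arithmetic behind these targets, the sketch below (all numbers invented) computes an effort split from tracked engineer-hours and applies the 15% bug-fix threshold discussed above:

```python
# Hypothetical effort tracking, in engineer-hours per quarter.
effort = {
    "new functionality": 480,
    "enhancements": 240,
    "bug fixes": 160,
    "platform innovation": 120,
}

total = sum(effort.values())
for category, hours in effort.items():
    print(f"{category}: {hours / total:.0%}")

# Apply the threshold from the discussion above.
if effort["bug fixes"] / total > 0.15:
    print("Bug-fix share exceeds 15%: potential quality issue to investigate.")
```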

“How long does it take for a customer to onboard?”

Hidden complexities in the platform, data model, and usability usually reveal themselves in how new customers reach the point of being productive with the software. These complexities can result in additional costs to remove technical debt or engage in a UI/UX re-design; they can also impair the company’s ability to absorb customers from competitors.

Confidence-building:

  • Less than a day (duration).
  • A defined and automated (or repeatable) process.

Potential concern:

  • Weeks/Months.

Follow-up question:

  • “What are some of the challenges?”
    • Keep in mind that everyone will say “Customer communication and/or availability”, so drill deeper. We’re looking for potential application/platform issues (e.g. “It takes a while to provision them”; “Data import is difficult”).
    • Confirm they are not doing any custom coding as part of onboarding.

“What is your approach to testing? For example, are QA Engineers integrated with developers, do you have automated testing, do you track code coverage?”

QA has a wide-ranging reach, and how a company approaches it helps gauge a number of potential areas to examine further (e.g. Security).

Confidence-building:

  • While this is a very open-ended question, we’re looking for mentions of:
    • Dedicated QA team;
    • QA Engineers integrated with developers (meaning QA participates in planning, builds test cases, and tests during the sprint);
    • Mandated unit testing;
    • Automated testing in place (a minimal example appears at the end of this section).
  • Remember that low levels of automation are OK as long as there is a plan in place to address them (e.g. dedicated time spent on automation per sprint, or a hiring plan that includes QA Automation Engineers).

Potential concern:

  • No dedicated QA team (engineers test their own work, which also takes time away from development efforts);
  • Manual testing (inefficient and error-prone);
  • “Offset QA Sprints” (Engineers move onto the next set of work items, while QA tests the just-completed work).

Follow-up:

  • “What areas of the platform contain the highest levels of technical debt?”
    • We’re looking for familiarity with quality issues as well as trying to gain some insight into the history of the code base; usually the oldest code is the first thing that comes to mind.
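
For reference, “automated testing” at its smallest looks like the sketch below: a unit test written in Python against an invented function, using the real pytest framework, that can run on every commit. Code coverage can then be tracked with the pytest-cov plugin (`pytest --cov`).

```python
import pytest

# Hypothetical function under test.
def sales_tax(amount: float, rate: float) -> float:
    """Return the sales tax owed, rounded to cents."""
    if amount < 0 or rate < 0:
        raise ValueError("amount and rate must be non-negative")
    return round(amount * rate, 2)

# Automated checks, collected and run on every commit via `pytest`.
def test_sales_tax_basic():
    assert sales_tax(100.0, 0.07) == 7.0

def test_sales_tax_rejects_negative_amount():
    with pytest.raises(ValueError):
        sales_tax(-1.0, 0.07)
```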

“Do you have regular security penetration testing, vulnerability scans? If so, what were the latest results? How quickly were issues remediated?”

This is a quick introduction to the company’s overall security posture. We’re looking for crisp answers and data points.

Confidence-building:

  • Yes, regularly (e.g. yearly, quarterly).
  • We’re also looking for a recent (within the last year) test/scan with a quick turnaround in fixing discovered issues.

Potential concern:

  • No, have never done them.
  • Done on an ad-hoc basis (usually indicative of a reactive organization; preventative measures have been sidelined, an approach that may carry over to other security areas).

Follow-up question:

  • “What sensitive data is stored, and is it encrypted?”
    • The company should be able to enumerate the types of data stored, who has access to them, and how activity on the data is audited.
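
As a point of reference for the encryption half of that follow-up, the sketch below encrypts a sensitive field using the real Python `cryptography` library’s Fernet recipe (the field itself is invented; a real system also needs key management, access control, and audit logging around this):

```python
from cryptography.fernet import Fernet  # pip install cryptography

# In practice the key comes from a secrets manager, never from source code.
key = Fernet.generate_key()
fernet = Fernet(key)

# Hypothetical sensitive field: encrypt before persisting at rest.
ssn_plaintext = b"123-45-6789"
ssn_encrypted = fernet.encrypt(ssn_plaintext)

# Decrypt only when needed, through an audited code path.
assert fernet.decrypt(ssn_encrypted) == ssn_plaintext
```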

“Are there any regulatory requirements for compliance (e.g. PCI, HIPAA, GDPR)? If so, are you compliant? When was the last certification?”

Regulatory compliance adds complexity to development, deployment and maintenance. Looming compliance certifications can have a significant impact on operational costs, and divert the team’s attention from planned development activities.

Confidence-building:

  • Current in all compliance requirements.
  • Remember that it’s important to understand what it means to be “compliant”; ask if required external certifications are in place, or if “self-certified compliance” is the standard used.

Potential concern:

  • Required compliance has either lapsed or is still in progress.

“Have you had any security breaches, outages or degradations to the customer experience in the last two years?”

Another question where we’re looking for crisp answers and data points. The CEO/CTO should be able to answer this in detail; otherwise, it’s a potential leadership concern to be investigated further.

Confidence-building:

  • None;
  • 1-2 (with an explanation and description of remediation steps taken afterward).

Potential concern:

  • Multiple issues;
  • No subsequent changes to, or effect on, existing processes or monitoring (proactive organizations apply lessons learned and fix the root causes of issues).

A note on the “new normal”

Discussing the effects of COVID-19 on companies will be a topic to cover early in the deal flow, and will likely need a deep dive before an LOI is signed. Remember that you may be reviewing Technology processes, designs, and team dynamics as they stand today; it will be important to understand how these have changed.

Use in the field

These questions are a starting point for a longer conversation. Some red flags can be cleared before the LOI is signed, which in turn helps focus the diligence efforts and raise initial confidence. Our goal is to surface potential issues early in the deal flow; pressure-testing the investment thesis from a technology perspective starts on day one.
