What does a good pentesting tender look like?

Over the past few weeks, there has been a great deal of attention on the MIAUW framework. We recently published a blog about this [LINK]. Previously, we also wrote a similar article about the CCV Pentesting Quality Mark (Dutch only). These frameworks provide structure and guidance for clients and suppliers alike, and are increasingly viewed by customers as a mark of quality. However, they primarily say something about the process and the way pentesting is organised. Think of it as the BOVAG quality mark for car garages: a garage carrying the mark enjoys a stamp of approval, yet we all know that actual quality varies enormously between garages with the same mark.

What pentest frameworks focus on far less is the substantive, technical quality of the pentest itself. A tender that assesses suppliers solely on compliance with MIAUW or CCV is mainly selecting for process discipline. That is important, but it does not tell you whether a supplier is capable of finding and interpreting your most critical vulnerabilities.

That is exactly where we want this article to help. We show how to structure a tender in such a way that, alongside a solid process, you can also select for genuine craftsmanship. We assume a volume contract rather than a one‑off pentest. The goal is a selection that not only complies with a quality mark, but also delivers the best testers for your specific risks.

Too many pentesting tenders end up as a competition to see who can produce the thickest bundle of paperwork. For some parties, this feels like home turf. They score highly on frameworks, checklists and formal responses because they know the tendering game inside out. That does not automatically say anything about who has the highest‑quality pentesters in the field.

Anyone who truly wants to bring in real expertise needs to rewrite the rules of the game: less paper, more proven quality.

Start with a clear objective

A pentesting tender is not about collecting as many tick‑boxes as possible, but about procuring high‑quality testing capacity that genuinely fits your needs.

Strong tenders start with a clear description of the IT landscape and an analysis of the current pentesting process, including its strengths and weaknesses. From there, they zoom in on the different types of testing required (web, infrastructure, mobile, OT, cloud, and so on), the desired level of depth (from automated scanning to manual testing), and the required testing frequency: one‑off, periodic or even continuous.

It is also wise to make explicit whether the scope is fixed or whether suppliers are allowed to think along and make adjustments. That single choice determines whether you are mainly looking for execution capacity or for strategic sparring partners.

Keep the playing field open

Make sure pentesting does not become just one component of a large tender that also includes other services such as MDR or security awareness. In addition, split contracts by test type where specialists are scarce in the market, for example OT or red teaming. An added benefit is that specialists can demonstrate their strengths without having to compete in areas where they are weaker. Ask suppliers to rank their own levels of expertise per test type. This allows smaller parties to score highly where they genuinely excel, without being penalised for less relevant areas.

Look ahead

Multi‑year partnerships require the ability to adapt to new attack and defence techniques. Ask about concrete innovations, how knowledge development is safeguarded, and how teams stay up to date. Suppliers that invest structurally in training and tooling do not just deliver top quality today, but continue to do so in the future.

Also require a dedicated team for the contract, so that insights gained from pentests accumulate and are retained over time.

Assess with the right lens

Ask for references, but let go of strict sector‑specific requirements. Far more important is that references are comparable in size and complexity to your own engagement. A supplier working in a different sector can perform excellently if they have experience with systems, risks and scale that match your situation. Focusing on this gives a much better picture of relevance and proven capability. Make sure to actually call the references as well. This provides a complete picture and can be the difference between a “good” and a “very good” assessment.

Also, avoid turning too many requirements into knock‑out criteria. Strict entry requirements, such as turnover thresholds or large numbers of standard certifications, can unintentionally exclude smaller, highly specialised firms. These are often the parties that bring unique depth and creativity to their testing. Limit knock‑out criteria to genuinely essential conditions and leave room for new, innovative players to compete.

Place the emphasis firmly on quality (80% quality, 20% price). This prevents the evaluation phase from being dominated by the usual suspects and gives surprisingly strong challengers a genuine chance to win on substance.
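To see what such a weighting does in practice, consider the minimal Python sketch below. The supplier names, prices and the price‑scoring rule (lowest bid gets full marks, others are scored proportionally) are illustrative assumptions on our part, not part of any framework:

    # Minimal sketch of an 80/20 quality/price weighting. The price rule used
    # here (lowest bid gets full marks, others proportionally fewer) is one
    # common convention, not a prescription; substitute your own formula.

    QUALITY_WEIGHT = 0.80
    PRICE_WEIGHT = 0.20

    def total_score(quality_score, price, lowest_price):
        """quality_score on a 0-10 scale; prices in the same currency."""
        price_score = 10 * (lowest_price / price)   # lowest bid scores 10
        return QUALITY_WEIGHT * quality_score + PRICE_WEIGHT * price_score

    bids = {"Supplier A": (9.0, 120_000), "Supplier B": (6.5, 90_000)}
    lowest = min(price for _, price in bids.values())
    for name, (quality, price) in bids.items():
        print(name, round(total_score(quality, price, lowest), 2))
    # Supplier A: 0.8*9.0 + 0.2*7.5  = 8.7 -> wins despite the higher price
    # Supplier B: 0.8*6.5 + 0.2*10.0 = 7.2

The point of the example: under an 80/20 split, a clearly stronger team wins even at a considerably higher price, which is exactly the behaviour the weighting is meant to reward.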

Avoid the paper tiger: opt for a two‑stage approach

The most effective tenders work like a two‑stage rocket. The first stage is a selection phase that sharply filters on content and relevance, without excluding small specialists. The second stage is a practical test, in which the remaining parties prove themselves by actually testing in a realistic scenario.

Five pillars for a strong selection phase

1. Team capacity and composition

The value of a good pentesting provider starts with the team:

  • Size and mix (junior, mid‑level, senior) indicate delivery capacity.
  • A fixed, dedicated team prevents knowledge having to be rebuilt time and again.
  • Low staff turnover and recognisable faces strengthen continuity.

2. Practical experience

True value emerges when certifications and concrete cases together tell a coherent story:

  • Suppliers rank their expertise per test type (web, infrastructure, mobile, cloud, hardware, source code).
  • Practical experience in comparable environments, supported by certifications.
  • Successes and lessons learned from previous projects show that the team does more than simply tick boxes.

3. Availability and location

Flexibility means being able to deliver where and when it matters:

  • Options for on‑site work in the Netherlands and remote testing (EU / non‑EU).
  • Realistic lead times and the ability to absorb peaks without loss of quality.
  • Alignment with your internal planning or release cycles.

4. Process fit and knowledge retention

Pentesting should be embedded in the supplier’s culture and way of working:

  • Core business, not a side activity.
  • Clear quality systems per test methodology to ensure consistent results.
  • Ongoing knowledge development through internal training, up‑to‑date tooling and participation in security events.
  • Reports that substantiate findings and translate them directly into remediation actions.

5. Relevant references

References are not a formality; they are a benchmark:

  • A maximum of three client references that match your scope in size, type and complexity. Sector relevance is often overrated.
  • Contact details to directly verify how collaboration and reporting quality were experienced.
  • Lessons learned from comparable challenges and how they were resolved.

The second stage: the practical test, where quality speaks

The second phase is where paperwork no longer dictates the outcome. This is the moment you see who can turn promises into tangible results. You can compare it to a test drive: all candidates get the same course, the same bends and the same road surface.

A strong practical test starts with choosing a scenario that resembles your real environment. This could be a modern web application with API integrations, a cloud component or an OT segment. Populate it with fictional but realistic data.

Opt for a white‑box approach: give testers as much relevant information as possible, such as architecture diagrams, technical documentation, API specifications, test accounts, whitelisting where required and access to relevant source code. This enables pentest teams to achieve maximum depth.

The scope must be clear and well defined, with a fixed timeframe (for example, three days with two pentesters), so that all parties have equal opportunities.

Interaction during the test is at least as important as the end result. Allow testers to work on site. This gives insight into their communication, problem‑solving and collaboration. Sometimes a conversation during the testing phase says more about future collaboration than a report ever could.

Then comes the assessment. Evaluate the practical test on the following criteria (a scoring sketch follows the list):

  • the quality of the findings;
  • the substantiation;
  • the number of findings;
  • the quality of remediation advice;
  • the depth achieved.
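
To keep that comparison consistent across suppliers, the criteria can be combined into a simple weighted rubric. The sketch below is only an illustration: the weights are our assumptions and should reflect your own priorities, for instance valuing depth and quality over sheer volume.

    # Illustrative rubric for scoring the practical test. Criteria mirror the
    # list above; the weights are assumptions to adapt to your own priorities.

    RUBRIC = {
        "quality of findings": 0.30,
        "substantiation":      0.20,
        "number of findings":  0.10,   # deliberately low: volume matters least
        "remediation advice":  0.20,
        "depth achieved":      0.20,
    }

    def practical_test_score(marks):
        """marks: criterion -> mark on a 0-10 scale, same keys as RUBRIC."""
        return sum(weight * marks[criterion]
                   for criterion, weight in RUBRIC.items())

    # A supplier with few but deep, well-substantiated findings still scores well:
    marks = {
        "quality of findings": 9,
        "substantiation": 8,
        "number of findings": 5,
        "remediation advice": 8,
        "depth achieved": 9,
    }
    print(practical_test_score(marks))   # 2.7 + 1.6 + 0.5 + 1.6 + 1.8 = 8.2

Whatever weights you choose, sharing them with the shortlisted parties up front keeps the playing field level, in the same spirit as giving every team the same scope and timeframe.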

In addition, the collaboration during the test deserves attention: how do testers keep you informed of progress, and how do they translate their findings into concise, effective management summaries? Can they explain complex issues in clear, accessible language and set priorities so that executives and managers can make decisions quickly? This way, you assess not only technical sharpness, but also who is capable of clearly communicating risks and root causes, and delivering real added value in the advisory part of the pentest.

In this way, the practical test becomes a full‑fledged proof of craftsmanship, rather than just a snapshot.

A final note

Anyone who wants a strong pentesting tender must make choices: less focus on paperwork, more on craftsmanship. With a clear objective, strategic criteria, five sharp pillars in the selection phase and a realistic white‑box practical test, you significantly increase the chances that the best party wins. Not the best tender writer. This is how a tender stops being a paper tiger and becomes a true quality test.

Securify is always happy to continue the conversation on this topic. You can speak directly with Kees Stammes, Managing Director, who is keen to discuss how pentesting tenders can be made stronger in substance.
