Capability + Outputs + Evidence

Dimension: Capability · Type: Foundation

A three-part rewrite formula for any CV bullet, motivation paragraph, or LinkedIn line that needs to pass a skills-based or AI-assisted screen. Name the capability, the output, the evidence.

Introduced by Olga Lehtinen (HR Development Lead, UNICC) at “The Skills Shift” session of the UN Inter-Agency Career Week 2026, on 8 May 2026. Olga ran the formula throughout her segment, with concrete rewrite examples for each component and a sustained argument that AI screening “amplifies what the system is configured to value”: the more panels move toward evidence, the less tenure alone will carry an application.

The framework

A sentence is ready when it explicitly names all three components; if it does not, rewrite it. Digital fluency and adaptability signals are now expected across all three.

When to use it

  • When rewriting a CV or motivation letter for a role with skills-based screening criteria (most UN Common System recruitment is moving in this direction).
  • When updating your LinkedIn profile and want to align with how AI screening tools and recruiters increasingly evaluate.
  • When auditing whether your existing application materials still travel in 2026, or are still anchored to a tenure-and-duties model that is fading.
  • As a structural test on any single sentence in a CV bullet or cover letter that names something you have done.

The three components

Capability. What you can do. Phrased as a transferable capability, not a duty title or a tenure claim.

  • Weak: “Five years in programme support.”
  • Stronger: “Synthesised input that informed decision-making and aligned stakeholders.”

The shift is from describing where you sat (duty list) to describing what you can do (capability), in language that travels across roles and organisations.

Outputs. What you produced or changed. The deliverable, the decision, the metric that moved.

  • Weak: “Coordinated cross-agency efforts.”
  • Stronger: “Coordinated inputs from six partners to deliver a consolidated brief used in decision-making.”

The output makes the capability concrete. It also gives the reader something to attach the credibility to: the brief existed, the decision happened, the alignment shifted.

Evidence. What proves it. Numbers, names, deliverables, endorsements, durations, scale.

  • Weak: “Strong communication skills.”
  • Stronger: “Translated technical analysis into briefings used by three regional bureau directors and cited in two senior leadership decisions in 2024.”

Evidence is what survives the scrutiny that comes later in the process. AI screening tools surface it; recruiters check it; selection panels probe it.

On digital fluency and adaptability signals

The session was emphatic that, regardless of function (HR, child protection, policy, administration), capability bullets now need to surface signals of:

  • Digital fluency: “Use data and AI tools weekly to analyse, summarise, and improve inputs” beats “completed analytics training”. Active practice, not certification.
  • Adaptability: capacity to learn new responsibilities quickly, demonstrated through specific transitions across teams, agencies, or thematic areas.

These are not additional bullets; they are an expectation that runs across all three components above. Every bullet that touches your work should now also signal how digital tools were part of how you did it, and how you adapted when the context shifted.

Steps

  1. Pick one bullet from your CV or one sentence from your cover letter. The shorter the better for the first run.
  2. Run the three-part test.
    • Does it name a capability? Specific, transferable, in capability language not duty language.
    • Does it name an output? A concrete deliverable, decision, or change.
    • Does it name evidence? Numbers, named contexts, durations, endorsements.
  3. Strengthen the weakest of the three. Most bullets fail on evidence. Some fail on capability (still duty-anchored). Few fail outright on output, but plenty are vague about it.
  4. Add the digital fluency signal where it is honest. If the work involved data tools, AI prompts, dashboards, or automation, name them. Not as decoration; as part of the capability.
  5. Repeat for every bullet. A CV with five bullets that pass the test beats a CV with twelve that do not.
  6. Pair with JD Colour-Coded Breakdown. The capability language should align with the JD’s terminology where it is honest.
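As a toy illustration of the three-part test in step 2, a bullet can be decomposed into its components and checked for gaps. This is my own sketch, not anything from the session; the `Bullet` class and its methods are hypothetical, and the writer still has to judge whether each field is honest and specific.

```python
from dataclasses import dataclass


@dataclass
class Bullet:
    """One CV bullet decomposed into the three components (filled in by hand)."""
    capability: str = ""  # what you can do, in transferable language
    output: str = ""      # the deliverable, decision, or metric that moved
    evidence: str = ""    # numbers, names, durations, endorsements

    def missing(self) -> list[str]:
        """Name the components still empty -- the ones to strengthen first."""
        components = [("capability", self.capability),
                      ("output", self.output),
                      ("evidence", self.evidence)]
        return [name for name, value in components if not value.strip()]

    def passes(self) -> bool:
        """The sentence is ready only when all three components are present."""
        return not self.missing()


b = Bullet(capability="Cross-agency coordination",
           output="Consolidated programme brief used in two regional decisions",
           evidence="")
# b.missing() returns ["evidence"]: per step 3, that is the component to build.
```

The point of the structure is step 3's triage: the helper does not judge quality, it only makes the weakest (empty) component visible so you know where to rewrite.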

Worked example

A staff member with five years in programme support rewrites three CV bullets using the formula.

Bullet 1, before:

“Five years in programme support, helping prepare meetings and following up on action items.”

Bullet 1, after:

“Synthesised input from four implementing partners and three internal teams into a weekly decision brief that the country director used to set the agenda for senior leadership meetings; reduced the prep cycle from three days to one across 18 months.”

Capability: synthesis across partners and internal teams. Output: weekly decision brief used by the country director. Evidence: four partners, three teams, three days to one, 18 months.

Bullet 2, before:

“Strong communication and coordination skills.”

Bullet 2, after:

“Coordinated inputs from six partner agencies to deliver a consolidated programme brief used in two regional bureau decisions in 2024; built the brief workflow on a shared template that two other country offices subsequently adopted.”

Capability: cross-agency coordination. Output: consolidated brief used in named decisions, plus the workflow template. Evidence: six partners, two decisions, 2024, two adopting offices.

Bullet 3, before:

“Completed Microsoft Excel and Power BI training.”

Bullet 3, after:

“Use Excel and Power BI weekly to analyse partner performance and surface delivery delays; built a delay-flagging dashboard now in routine use across the country office.”

Capability: data analysis and surfacing operational signals. Output: a delay-flagging dashboard. Evidence: weekly use, “in routine use across the country office”. Crucially, the bullet now signals digital fluency through active practice rather than completion of a training.

Before, the same person’s CV had three duty-and-tenure bullets that AI screening would surface only if it was configured to value years. After, it has three capability-output-evidence bullets that surface in any system configured around skills.

Pitfalls

  • Skipping evidence because it feels like bragging. Numbers, names, and durations are not bragging; they are how the system distinguishes specific contributions from generic claims.
  • Inventing evidence to fit the formula. The formula is a rewrite test on real work, not a creative-writing prompt. If you do not have evidence, the capability claim is premature; either build the evidence in your current role or pick a different capability.
  • Stopping at capability and output. This is the most common partial pass. The bullet sounds modern but still does not survive AI screening that has been configured to look for evidence.
  • Naming a tool without a workflow. “I use Power BI” is not yet a capability sentence. “I use Power BI weekly to analyse partner performance and surface delivery delays” is.
  • Treating tenure as evidence. “Five years in” is duty data, not evidence of capability. Evidence is what changed because of you, not how long you were there.
  • Listing certifications instead of digital work habits. A certificate from 2022 is weaker than “use this tool weekly” in 2026. The session was direct on this.
  • Pasting URLs in CVs. The speaker’s specific advice from the same session: name the tool, the workflow, and the impact, but generally do not paste URLs, because recruiters will not click them. The evidence has to live in the prose.

When not to use it

When the application format is structurally tenure-anchored (some legacy UN system templates that ask “how many years in this role”). The formula still applies in spirit; you adapt to the format constraints while smuggling capability-output-evidence language into the open fields.

When the role is in a tradition that does not use AI screening and reads every CV by hand (some smaller IGOs, some bilateral missions). The evidence still helps, but the urgency to surface AI fluency may be lower.

A note on the source

The three-part formula is the speaker’s distillation of how UN Common System recruitment is moving (skills-based hiring, AI-assisted screening, evidence-based panels). The formula itself is consistent with broader skills-based hiring literature and adjacent frameworks; the Skills-in-Use CV Pattern from Day 3 Session 2 is a closely related writing pattern. The specific framing of digital fluency and adaptability as cross-cutting signals expected across all three components is the speaker’s contribution.

How I use it

Personal note pending. Davide to fill.


Notes compiled by Davide Piga. Last updated 2026-05-09.