AI Pioneer Interdisciplinary Fellowships 2026: How to Win Up to £1,750,000 to Build Domain AI Capability (Invite Only)

If you are a seasoned researcher with deep domain expertise but not a background in core AI methods, this fellowship is made for you—provided you’ve already cleared the outline-stage hurdle and received an invitation to submit a full proposal.

JJ Ben-Joseph
📅 Deadline Feb 24, 2026
🏛️ Source UKRI Opportunities

Think of this as a rare bridge grant: it’s explicitly designed to fund established investigators who want to bring serious AI capability into their field, rather than funding AI specialists to apply their usual tools to familiar problems.

This is not an entry-level fellowship or a seed award. The full economic cost (FEC) ceiling is a hefty £2,187,500, and UK Research and Innovation will cover 80% of that—so in practice you could receive up to £1,750,000 in UKRI funding. Projects can run up to three years and are expected to begin on 1 October 2026. That money buys time, people, infrastructure and, if you plan it right, a durable AI capability in your group or department.
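The 80/20 arithmetic is worth sanity-checking before you start budgeting. A minimal sketch, using only the figures stated in the call:

```python
# FEC ceiling and UKRI funding rate, as stated in the call
fec_ceiling = 2_187_500   # maximum full economic cost, GBP
ukri_rate = 0.80          # UKRI covers 80% of FEC

ukri_contribution = fec_ceiling * ukri_rate          # what UKRI pays
institutional_gap = fec_ceiling - ukri_contribution  # what the host must cover

print(f"UKRI contribution: £{ukri_contribution:,.0f}")  # £1,750,000
print(f"Institutional gap: £{institutional_gap:,.0f}")  # £437,500
```

Note the flip side of the headline figure: at the cap, your institution is underwriting £437,500, which is why early engagement with your finance office matters.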

Read on for a practical, no-nonsense guide to whether this is the right fit, how to shape a persuasive application, what reviewers will pay attention to, and the specific steps to get your full proposal ready if you’ve been invited.

At a Glance

Award Type: Fellowship (interdisciplinary, invite-only full application)
Maximum Full Economic Cost (FEC): £2,187,500
UKRI Contribution: 80% of FEC (up to £1,750,000)
Project Duration: Up to 3 years
Project Start Date: 1 October 2026
Eligibility: Established researchers across the UKRI remit without a background in core AI research; invitation required after a successful outline application
Key Funder Bodies: EPSRC, MRC, BBSRC, ESRC, STFC, NERC, AHRC
Deadline for Full Application: 24 February 2026 (16:00); invitation required
Contact: support@funding-service.ukri.org; TFSchangeEPSRC@epsrc.ukri.org; ai.robotics@epsrc.ukri.org
Apply at: https://www.ukri.org/opportunity/turing-ai-pioneer-interdisciplinary-fellowships-outline-applications/

What This Opportunity Offers

This fellowship provides more than a paycheck. It funds the creation of a sustained AI capability inside a research group or across a department. With up to £1.75m of UKRI money (80% of the FEC cap), you can hire several full-time researchers, bring in data engineers, purchase significant compute and storage, fund secondments with AI groups, or set up a demonstrator that proves the approach. You’re buying capacity: people who know how to build robust AI pipelines, governance frameworks to manage sensitive datasets, and the time to migrate from ad hoc scripts to reproducible, production-quality systems.

Funded projects can be truly interdisciplinary. The list of partner councils makes that clear: medical, biological, social, physical and environmental sciences, plus arts and humanities, are all in scope. That means a cultural historian using machine learning to analyze digitized archives, an ecologist building AI-driven remote sensing pipelines, or a clinician creating clinically validated AI decision-support tools could all be eligible—if they build the right team and the proposal demonstrates how domain expertise and AI capability will be combined.

Finally, this grant expects impact beyond a single paper. Reviewers look for plans showing that the new capability will remain in the host institution after funding ends: training programs, reusable software, open datasets (where appropriate), policy briefings, or spin-out-ready components. In short, you’re expected to leave the field—and your institution—in a stronger position.

Who Should Apply

This fellowship is aimed at established researchers who do not come from core AI research but who want to apply advanced AI methods within their domain. “Established” generally means mid-career or senior academics who can show a track record of leadership, funded projects, and sustained research outputs. If your career so far is built on domain expertise—epidemiology, oceanography, archaeology, clinical trials, linguistics, etc.—and you can explain a credible plan to embed AI work into that context, this is for you.

Examples of good fits:

  • A public health professor who wants to develop AI models that improve outbreak detection using heterogeneous health and mobility data, and who will hire AI specialists to handle model design and validation.
  • A palaeographer aiming to create AI-assisted transcription tools for fragile manuscripts, pairing computational linguists and computer vision engineers with domain curators.
  • An environmental scientist proposing an integrated AI workflow for long-term satellite data that supports policy-ready predictions of flood risk.

If your background is in core machine learning or AI methods (e.g., you have a long history of publications in ML venues and supervise PhD students in core AI topics), this call is not aimed at you. Conversely, if you have no interest in taking responsibility for building capacity inside your group—if you just want a short-term collaboration—this also isn’t the right fit. The funder wants durable capability built and owned by a non-AI domain expert.

Insider Tips for a Winning Application

This section matters more than any single paragraph in your case for support. Treat these as hard-won shortcuts.

  1. Build an explicit team structure that separates domain leadership from technical delivery. Reviewers are wary of PIs who say “I’ll learn machine learning myself.” Show a clear division: you lead the domain questions and have co-investigators or senior staff to deliver AI model development, engineering, and software best practices.

  2. Cost realistically, then justify. With an FEC cap, you have to be honest about what compute, storage and personnel cost. Detail staff effort (FTEs), expected cloud or HPC costs per annum, and a budgeted line for software engineering and maintenance. Funders dislike vague “compute required” lines.

  3. Show pathways to sustainability. How will the capability survive after the 3-year term? Plans might include embedding trained PDRAs into permanent roles, creating service offerings for the department, securing industry co-funding, or publishing tools under permissive licences with active maintenance agreements.

  4. Prioritize data governance and ethics from day one. If your plan uses health or personal data, include data protection and ethical approvals, secure storage design, and a plan for bias monitoring. This must be more than a paragraph; it should be a work package with milestones.

  5. Include realistic timelines and deliverables. Break the project into phases: set-up (hiring, data agreements), model development, validation and deployment, and knowledge transfer. Assign deliverables and review points. Reviewers reward clarity.

  6. Demonstrate institutional and partner commitment. Letters that say “we will host compute” are weak. Strong letters specify costed commitments, matched support, or binding secondments. If you have an industry partner, include an explicit statement of what they will provide (datasets, compute credits, user testing).

  7. Use pilot evidence. Even a small proof-of-concept—one dataset, a baseline model and an evaluation metric—can make your plan credible. It shows you understand workable approaches and the data realities.

  8. Plan for training and capacity building. Include training modules for postdocs and PhD students, workshops for departmental staff, or a plan to create reusable teaching materials.

These tips should be woven through your application rather than dumped into a single section. Make each tip visible in the relevant work package, budget note, or letter of support.

Application Timeline (Work Backward from the Deadline)

Deadline for invited full applications: 24 February 2026 at 16:00. Projects must start 1 October 2026. Given these dates, work backward and account for institutional clearance times.

  • January 2026: Final edits, sign-offs, institutional approvals. Many universities have internal deadlines two weeks earlier—check with your research office.
  • December 2025: Complete first full draft of case for support, budget and work packages. Circulate to internal reviewers and contacts in AI groups.
  • November 2025: Secure letters of support, confirm co-investigators and commitments of compute or matched funding.
  • October–November 2025: Finalise pilot results and data access agreements. Start ethics and data governance approvals where needed.
  • September 2025: Draft budget and engage the university finance office to calculate FEC and the 20% institutional contribution.
  • August 2025: If you have been invited after an outline, use this period to convert outline feedback into concrete changes. If you haven’t had outline feedback yet, clarify timelines with the programme contacts.

Submit at least 48 hours before the deadline to allow for portal problems. Then allow August–September 2026 for project set-up before the 1 October start if awarded.
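The safety margins above translate into concrete target dates. A small sketch of the date arithmetic (the two-week internal-review window is an assumption; your research office's actual lead time may differ):

```python
from datetime import datetime, timedelta

# Call deadline from the opportunity page
deadline = datetime(2026, 2, 24, 16, 0)

# Safety margins suggested above; internal lead time varies by institution
portal_buffer = timedelta(hours=48)   # buffer against portal problems
internal_review = timedelta(weeks=2)  # assumed research-office lead time

submit_by = deadline - portal_buffer
internal_deadline = deadline - internal_review

print("Aim to submit by:", submit_by)           # 2026-02-22 16:00
print("Internal deadline:", internal_deadline)  # 2026-02-10 16:00
```

Working backward like this, your full draft effectively needs to be finished by mid-February at the latest, which is why the December 2025 first-draft milestone is not as generous as it looks.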

Required Materials and How to Prepare Them

The exact document list will follow UKRI’s portal templates, but expect to supply a comprehensive case for support (often 6–12 pages), a detailed budget and justification, CVs/biographical statements for key personnel, a data management plan, ethics statements, letters of support, a Gantt chart or project plan, and a pathway-to-impact statement.

Preparation advice:

  • Case for Support: Write this as a narrative that ties the problem, the novelty of applying AI in your domain, and the plan to build capacity. Use headings like Aims, Background, Methods, Work Packages, Risk and Mitigation, and Impact.
  • Budget: Work with your finance office to produce FEC and ensure your institution understands the 20% non-UKRI contribution. Break down personnel by role, show duration and %FTE, and include travel, consumables, and compute costs.
  • Data Management: Include storage, backups, metadata standards, anonymisation or pseudonymisation strategies, and long-term curation plans.
  • Letters of Support: Ask partners to be precise—state exactly what resource they will provide and for how long.
  • CVs: Emphasize leadership, relevant projects, and evidence of managing complex, interdisciplinary teams.
  • Ethics approvals: If you need NHS or institutional approvals, show timelines and preliminary engagement with review boards.

Prepare these documents to be modular so you can reuse language across sections without repetition.
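When building the budget, it helps to model the cost lines before handing them to the finance office. A sketch of the kind of check involved; every figure below (roles, salaries, FTE fractions, compute estimate) is a hypothetical placeholder, not call guidance:

```python
# Illustrative budget check. All staff and compute figures are
# hypothetical placeholders; only the cap and rate come from the call.
FEC_CAP = 2_187_500  # GBP, from the call
UKRI_RATE = 0.80

# (role, annual cost GBP, FTE fraction, years) -- placeholder values
staff = [
    ("Postdoctoral researcher",     55_000, 1.0, 3),
    ("Research software engineer",  60_000, 1.0, 3),
    ("Data engineer",               58_000, 0.5, 3),
]
compute_per_year = 40_000  # cloud/HPC estimate, placeholder
years = 3

staff_total = sum(cost * fte * yrs for _, cost, fte, yrs in staff)
compute_total = compute_per_year * years
total_fec = staff_total + compute_total

assert total_fec <= FEC_CAP, "over the FEC ceiling -- rescope"

ukri_share = total_fec * UKRI_RATE
print(f"Total FEC: £{total_fec:,.0f}")
print(f"UKRI (80%): £{ukri_share:,.0f}")
print(f"Institution (20%): £{total_fec - ukri_share:,.0f}")
```

The point of the exercise is not the numbers but the structure: every role appears with an FTE fraction and duration, compute is a named annual line rather than a vague "compute required", and the institutional 20% is visible from the start.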

What Makes an Application Stand Out

Reviewers will be looking for three things: clarity of the domain problem, plausibility of the AI approach and engineering plan, and evidence that the capability will persist. Standout proposals do these well.

Clarity of problem: Describe a tightly scoped research question with measurable outcomes. Ambitious grand aims without clear deliverables are penalised.

Plausibility of technical plan: Be explicit about models, evaluation metrics, validation datasets, and failure modes. If your data are noisy or sparse, explain how you will handle that—through augmentation, transfer learning or expert-in-the-loop approaches.

Engineering and reproducibility: Applications that show plans for continuous integration, version control, containerisation, and reproducible pipelines score well. Demonstrating that outputs will be usable by non-experts (through wrappers, APIs or documentation) helps.

Sustainability and training: Proposals that include detailed training plans and concrete sustainability pathways—e.g., shared services, course modules, or commercialisation routes—are favoured.

Institutional buy-in: Strong, specific commitments from host institutions or collaborators (funding compute credits, offering permanent positions contingent on outcomes) elevate an application above those with only polite support letters.

Common Mistakes to Avoid (and Fixes)

  1. Under-budgeting compute and engineering: Many applicants budget only for people and forget that production-level AI requires substantial compute and engineering time. Fix: include a realistic annual compute budget and an engineering FTE.

  2. Treating AI as a black box: Vague statements like “we will use machine learning” won’t convince reviewers. Fix: specify model classes, evaluation approaches, and validation datasets.

  3. Lack of data governance: Especially for health or social data, failing to show secure handling and approvals is fatal. Fix: draft a data management plan early and engage data custodians.

  4. No sustainability plan: Projects that disappear after funding ends lose points. Fix: articulate concrete transfer routes for staff, code, and services.

  5. Weak letters of support: Generic cheerleading is not evidence. Fix: request letters that detail resources, time commitments and named staff.

  6. Ignoring training: If you won’t train the next generation, reviewers may ask why you’re building capability at all. Fix: include specific workshops, modules and trainee roles.

Frequently Asked Questions

Q: Do I have to be a UK citizen? A: Eligibility follows UKRI rules and institutional requirements; typically the host organisation must be UK-based and the fellow will be employed by that organisation. Check the full guidance and speak to your research office for details.

Q: Can an AI specialist be a co-investigator? A: Yes—bringing AI expertise onto the team is essential for credibility. The PI should remain the domain lead, but a senior AI co-investigator (from within the UK or a partnering institution) strengthens the technical plan.

Q: What proportion of the budget can go to salaries versus equipment? A: There’s no fixed split, but reviewers expect a balanced budget that funds people to build and maintain systems, plus compute and storage. Justify everything in the budget notes.

Q: Are international partners allowed? A: International collaborations are possible, but the funding flows to UK-based host organisations. Explain any non-UK roles carefully and show how they add unique value.

Q: Will I get feedback if my outline wasn’t accepted? A: UKRI typically provides feedback on outlines. If you were not invited, read the feedback carefully and consider reapplying in a future call or partnering with an AI group to strengthen your next outline.

Q: Can I submit more than one application? A: Check the specific call rules. Typically you can only be named as PI on one proposal in a single scheme; you can be a co-investigator on others. Confirm with the programme contact.

Q: How risky can the project be technically? A: Reviewers are comfortable with moderate technical risk if the team has mitigation strategies. High-risk pure ML research without domain grounding is out of scope.

How to Apply / Get Started

If you’ve received an invitation to submit a full application, treat the invitation as a starting pistol. Mobilise your institution’s research office, assemble your technical team, and set internal milestones to finish drafts well before 24 February 2026 at 16:00. Confirm with your finance office how FEC will be calculated and ensure they understand the 80% UKRI contribution and 20% institutional match.

Ready to apply? Visit the official opportunity page and follow the guidance: https://www.ukri.org/opportunity/turing-ai-pioneer-interdisciplinary-fellowships-outline-applications/

If you need clarification, contact the programme support team: support@funding-service.ukri.org, TFSchangeEPSRC@epsrc.ukri.org, or ai.robotics@epsrc.ukri.org.

Final practical checklist before submission:

  • Have you included a clear team structure with named AI leads?
  • Does your budget show FEC and the 20% institutional gap?
  • Is there a data governance work package with timelines for approvals?
  • Are letters of support specific and costed where relevant?
  • Have you shown how outcomes will be maintained post-funding?

If you can answer “yes” to these questions and you’ve been invited, you’re in a strong starting position. This fellowship is demanding, but for domain experts who want to embed serious AI capability into their research, it can be transformative for your group and institution—if you plan it like you mean to keep it.